EP0979482A1 - Alignment method and apparatus for retrieving information from a two-dimensional data array - Google Patents

Alignment method and apparatus for retrieving information from a two-dimensional data array

Info

Publication number
EP0979482A1
Authority
EP
European Patent Office
Prior art keywords
data
image
sensor
alignment
retrieval
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP97925517A
Other languages
German (de)
French (fr)
Inventor
Loren Laybourn
Richard E. Blahut
James T. Russell
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ioptics Inc
Original Assignee
Ioptics Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ioptics Inc filed Critical Ioptics Inc
Publication of EP0979482A1


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light

Definitions

  • The invention concerns systems for optically storing and retrieving data stored as light-altering characteristics on an optical material and providing fast random-access retrieval, and more particularly, an alignment method and apparatus for sensing an optical image of the data and converting same to electrical data signals.
  • Optical memories of the type having large amounts of digital data stored by light-modifying characteristics of a film or thin layer of material, and accessed by optical light addressing without mechanical movement, have been proposed but have not resulted in widespread commercial application.
  • the interest in such optical recording and retrieval technology is due to its record density and faster retrieval of large amounts of data compared to that of existing electro-optical mechanisms such as optical discs, and magnetic storage such as tape and magnetic disc, all of which require relative motion of the storage medium.
  • serial accessing of data generally requires transfer to a buffer or solid state random access memory of a data processor in order to accommodate high speed data addressing and other data operations of modern computers.
  • Other storage devices such as solid state ROM and RAM can provide the relatively high access speeds that are sought, but the cost, size, and heat dissipation of such devices when expanded to relatively large data capacities limit their applications.
  • A system is provided for storing and retrieving data from an optical image containing two-dimensional data patterns imaged onto a sensor array for readout. Method and apparatus are provided for detecting and compensating for various optical effects, including translational and rotational offsets, magnification, and distortion of the data image as it is converted to electrical data by the sensor array.
  • Data may be stored for example in an optical data layer capable of selectively altering light such as by changeable transmissivity, reflectivity, polarization, and/or phase
  • data bits are stored as transparent spots or cells on a thin layer of material and are illuminated by controllable light sources to project an optically enlarged data image onto an array of sensors
  • Data is organized into a plurality of regions or patches (sometimes called pages). Selective illumination of each data page and its projection onto the sensor array accesses the data page by page from a layer storing many pages, e.g., of a chapter or book.
  • the present invention may be used in optical memory systems described in U.S. Patent No.
  • the sensor array may be provided by a layer of charge-coupled devices (CCDs) arrayed in a grid pattern generally conforming to the projected data page, but preferably the sensor grid is somewhat larger than the imaged data.
  • CCDs: charge-coupled devices
  • the data image generates charge signals that are outputted into data bucket registers underlying photosensitive elements
  • other output sensor arrays may be employed, including an array of photosensitive diodes, such as PIN type diodes
  • Misregistration takes several forms: focal (Z axis) error; rotational error about an origin; magnification error; and distortion. Focal (Z axis) misregistration can be minimized by careful optical and mechanical design, as is done in the embodiment disclosed herein.
  • the imaged data may be contaminated by electrical noise, by optical resolution limits and by dust or surface contamination on the data media and/or optical sensor
  • While it is possible to compensate for linear misregistrations by mechanical methods such as sensor stage rotation or mechanical (X-Y axis) translation, it is often not desirable to do so because of mechanical complexity, cost, and speed constraints.
  • Nonlinear misregistrations are considerably more difficult, if not impossible, to correct mechanically
  • raw image data is sensed on a grid larger than the page image and then electronically processed to determine the true data corrected for displacement, rotation, magnification and distortion
  • the processed, corrected data is then output to memory or throughput to applications
  • the sensor structure is a two-dimensional array of larger area than the two-dimensional data image projected onto the sensor array, and the individual sensor elements are smaller and more numerous (i.e., denser) than the data image symbols or spots in order to oversample the data image in both dimensions.
  • two or more sensing elements are provided in both dimensions for each image spot or symbol representing data to be retrieved
  • about four sensing elements are provided in the disclosed embodiment for each image spot, and intensity values sensed by the multiple sensor elements per spot are used in oversampling and correction for intersymbol interference.
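As a minimal sketch of the oversampling idea above (the function name and pixel values are illustrative, not from the patent): with roughly two sensor pixels per symbol in each dimension, a spot's total energy can be recovered by summing its 2x2 pixel group, even when the spot center drifts relative to the pixel grid.

```python
# Illustrative 2x-per-axis oversampling: each data spot falls on roughly
# a 2x2 group of sensor pixels, so summing the group recovers the spot's
# total energy regardless of sub-pixel offset.

def spot_energy(pixels, row, col):
    """Sum the 2x2 pixel group whose top-left element is (row, col)."""
    return (pixels[row][col] + pixels[row][col + 1]
            + pixels[row + 1][col] + pixels[row + 1][col + 1])

# A "one" spot centered on a pixel-group boundary: its unit energy is
# split evenly across four pixels, but the group sum is still 1.0.
sensor = [[0.0, 0.0,  0.0,  0.0],
          [0.0, 0.25, 0.25, 0.0],
          [0.0, 0.25, 0.25, 0.0],
          [0.0, 0.0,  0.0,  0.0]]
print(spot_energy(sensor, 1, 1))  # 1.0: full spot energy recovered
```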
  • Each page or patch of data is further divided into zones surrounded by fiducials of known image patterns to assist in the alignment processes and gain control for variations of image intensity
  • the analog level sensed at each of the oversampling sensor elements is represented by a multibit digital value, rather than simply detecting a binary, yes or no illumination
  • the preferred embodiment includes automatic gain control (AGC) of image intensity, which is initiated outboard of the data zones by using AGC skirts of known image patterns.
  • Additional features of the preferred embodiment include the provision of alignment fiducials containing embedded symbols of known patterns and positions relative to the zones of data symbol positions; the fiducial patterns have predetermined regions of maximum light and dark image content which provide periodic updates of the AGC processes summarized above. Using these processes, a coarse alignment method determines the approximate corner locations of each of multiple zones of data, and this is followed by a second step of the location procedure: processing corner location data to find a precise corner location.
  • the precise or fine corner locating scheme uses a matched filter technique to establish an exact position of a reference pixel from which all data positions are then computed
  • Alignment of the data to correct for various errors in the imaging process in the preferred embodiment uses polynomials to mathematically describe the corrected data positions relative to a known grid of the sensor array
  • These alignment processes, including the generation of polynomials, make use of in-phase and quadrature spatial reference signals to modulate to baseband a spatial timing signal embedded in the alignment fiducial, which is further processed through a low-pass filter to remove the spatial noise from the timing signal.
  • the combination of in-phase and quadrature spatial reference signals generates an amplitude-independent measure of the timing signal phase as a function of position along the fiducial.
  • the preferred embodiment uses a least squares procedure to generate the best fit of a polynomial to the measured offsets
  • the coefficients of the polynomials are then used to derive alignment parameters for calculating the displacement of data spot positions due to the various misalignment effects due to the optical, structural, and electrical imperfections
  • the preferred processing uses polynomials to mathematically describe the corrected data positions relative to a known grid of the sensor array
  • the sensor employs a broad channel detection architecture enabling data of exceptionally long word length to be outputted for use in downstream data processes
  • FIG. 1 is a block diagram of the ORAM system in accordance with the preferred embodiment
  • Figure 2 shows illustrations of data media at different magnifications to show the breakdown of the data hierarchy from a "chapter" into "patches" (also called pages), a "patch" (page) into "zones", and "zones" into data symbols or spots.
  • Figure 3 shows a portion of a data pattern portrayed as rotated, translated, and somewhat distorted with respect to the orthogonal sensor co-ordinates (three of the several forms of image defects which the method corrects)
  • Figure 4 is an illustration of a patch with an exploded view of a corner region containing a corner symbol, two AGC "skirts" and portions of two alignment fiducials
  • Figure 5 is a flow diagram overview of the sensor and alignment/bit retrieval process
  • Figure 6 shows data patches before and after AGC
  • Figure 7 illustrates an image of a patch showing the two sets of AGC skirts
  • Figure 8 shows a comparison of possible paths for AGC analysis; when centered on the AGC skirt, the AGC process can analyze a known pattern.
  • Figure 9 is a diagram of a sensor array with a patch image projected on it, showing how the sensor is divided into six sections for analysis
  • Figure 10 shows the process for finding the center of an AGC skirt
  • Figure 11 is a diagram of how AGC normalizes intensity of the patch image, illustrating that in the readout direction, the A to D converter thresholds are set by the peak and valley detection circuitry, and in the lateral direction, linear interpolation is used to set the thresholds.
  • Figure 12 is a diagram of a patch showing the regions of the patch associated with the three modes of AGC operation.
  • Figure 13 shows a section of sensor image highlighting a corner region, corner symbol, and spot or pixel in the corner used as an origin for referencing positions of nearby data symbols or spots.
  • Figure 14 shows the AGC skirts and corner symbols purposely aligned such that the row and column positions of the AGC skirt centers can be combined into coordinate pairs which become a coarse measurement of the corner symbol locations.
  • Figure 15 is a flow chart of the corner symbol convolution process.
  • Figure 16 is a fragment of the data image at the sensor showing one of the zones with corresponding fiducials including corner symbols.
  • Figure 17 is a flow chart of the data alignment process.
  • Figure 18 illustrates the placement of the filters on the alignment fiducials.
  • Figure 19 shows the typical curve for phase in x-direction as a function of x (assuming no noise).
  • Figure 20 shows values for phase in x-direction as a function of x (including noise).
  • Figure 21 shows values for phase in y-direction as a function of x (including noise).
  • Figure 22 shows linear (first order) fit to phase values.
  • Figure 23 shows quadratic (second order) fit to phase values.
  • Figure 24 is a diagram illustrating the labeling of the four fiducials surrounding a zone.
  • Figure 25 is an eye diagram showing the effects of noise, data spot interpolation and pulse slimming.
  • Figure 26 illustrates the relationship between symbol position on pixel array versus the weighting values used for interpolation.
  • Figure 27 shows the 16 regions of symbol position on the pixel and the corresponding pixel weights used for interpolation
  • Figure 28 shows the ORAM electronics receiver subsystem including sensor integrated circuit (IC)
  • Figure 29 shows relative pixel magnitude for single and grouped "ones"
  • Figure 30 is a functional block diagram of the sensor IC
  • Figure 31 shows an AGC skirt layout
  • Figure 32 shows A to D codes with respect to signal intensity
  • Figure 33 shows the signal flow on the sensor IC of Figure 30
  • Figure 34 shows an alignment-bit-retrieval (ABR) IC block diagram
  • Figure 35 depicts the segmented memory design of the ABR IC
  • Figure 36 shows the 8 word adder and accumulator function
  • Figure 37 shows the zone in image memory
  • Figure 38 shows related diagrams illustrating the interpolation and pulse slimming technique
  • Figure 39 is a diagram of the output RAM buffer
  • Figure 40 is a timing diagram from request to data ready access
  • a record is made as indicated at 10a, in which user data is encoded and combined with fiducials in data patterns called patches or pages that are written onto record media 19. More particularly, and as fully disclosed in copending applications PCT/US92/11356 and U.S. Serial No. 08/256,202, user data is entered at 35 and encoded/ECC at 36, whereupon data and fiducial patterns are generated at 37 and written at 38 to media, such as an optical data layer capable of selectively altering light in one or more of the above-described ways.
  • the data layer 19 thus prepared is then fabricated at 39 in combination with a lens array 21 to form media/lens cartridge
  • the image is of a two-dimensional data field, as written by E-beam on a chromium-coated quartz media substrate. To retrieve the data from the record, the media/lens cartridge 17 is removably placed in an ORAM reader indicated at 10b, and the data from each patch or page is selectively back-illuminated so as to be projected onto a sensor 27.
  • system controller 125 coordinates the operations of a read source 124, alignment/bit retrieval processor 32, and decode and ECC 127
  • a lens system focuses the image onto a sensor array 27 which converts light energy into an electrical signal
  • this signal is first sensed by analog circuitry, then converted to a digital representation of the image
  • This digital image representation is stored in RAM 30 whereupon it is operated on by the retrieval algorithms processor indicated at 32
  • the digitized image is processed to correct for mechanical, electrical, and optical imperfections and impairments, then converted to data and ECC at 127, and the data presented to the user via user interface 123
  • the symbols (or spots) making up the pages of the record are disclosed in this embodiment as bits of binary value; however, the invention is also useful for non-binary symbols or spots, including grayscale, color, polarization, or other changeable characteristics of the smallest changeable element of the record.
  • the two requirements for the sensor array are (1) that it be somewhat larger in both X and Y dimensions than the image projected on it, to allow for some misregistration without causing the data image to fall outside the active sensor region, and (2) that it have a pixel density in both the row and column dimensions greater than the density of the projected symbol image, sufficient to recover the data; in this embodiment it is approximately twice the symbol count projected on it. (The sensor hardware design providing this function is detailed in Section 4.1.)
  • the alignment method described in this disclosure will locate the image data array on the sensor, determine the position of each individual data symbol in the image relative to the known sensor grid, and determine the digital value of each bit
  • a fundamental purpose of the herein disclosed alignment method and apparatus is to determine the spatial relationship between the projected image of the data array and the sensor array
  • the grid of the sensor array is formed by known locations of the sensing cells or elements which are sometimes called pixels in the following description
  • a user request for data initiates an index search in RAM to determine the address of the patch(es) containing the desired data
  • the light source serving this data address is illuminated, projecting an image of the desired data through the optical system and onto the sensor. This image, projected on the sensor, is the input data for the alignment and bit-retrieval apparatus.
  • AGC: automatic gain control
  • ADC: analog-to-digital converter
  • Amplifier gain is set based on the intensity read from predetermined "AGC regions" spaced throughout the data pattern
  • AGC regions spaced throughout the data pattern
  • AGC "skirts" are located on the perimeter of the data patch. AGC skirts are the first illuminated pixels encountered as the array is read out; they provide an initial measure of intensity as image processing begins.
  • AGC "marks" located in the alignment fiducials along each side of each data zone AGC marks are used to update the amplifier gain as successive rows are read out from the sensor array.
  • the AGC skirts are used both to predict the locations of the AGC regions on the image plane and to set the initial gain of the ADCs. This is completed prior to processing the pixels corresponding to data symbol positions on the image.
  • Figure 7 depicts an entire patch of 21 data zones
  • the data zones on the top and left edge of the patch have AGC skirts aligned with their respective fiducial regions
  • There are two sets of AGC skirts, one along the top and one along the side. Dual sets of skirts enable bi-directional processing of the image and provide reference points for estimating the positions of the corner symbols (discussed below).
  • the AGC process consists of three operations
  • Operation 1) locating the AGC skirt regions; Operation 2) determining the centers of the AGC skirt regions; Operation 3) performing the AGC function. Operations 1 and 2 constitute a spatial synchronization process directing the AGC circuitry to the AGC regions. Synchronizing the AGC circuitry to the AGC regions allows gain control independent of data structure (see Figure 8). During Operations 1 and 2, the threshold values for the A-to-D converters are set to default values; during Operation 3, the AGC process sets the A-to-D converter thresholds.
  • each row of the sensor is analyzed starting from the top edge
  • Each pixel row is read in succession and divided into six separate sections for analysis ( Figure 9)
  • the algorithm defines the AGC skirt to be located when a specified number of neighboring pixels display an amplitude above a default threshold
  • an AGC skirt is considered located when four out of five neighboring pixel values are higher than the threshold
  • the AGC operation must accommodate the fact that the AGC skirts for sections 1 and 6 are encountered later in the readout of the sensor than those in sections 2-5.
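The skirt-location rule above (four of five neighboring pixels exceeding a default threshold) can be sketched as follows; the function name, threshold value, and pixel data are illustrative assumptions, not taken from the patent.

```python
# Illustrative AGC-skirt detection: a skirt is declared "located" when at
# least 4 of 5 neighboring pixel values in a row exceed a default threshold.

def find_agc_skirt(row, threshold=4, window=5, needed=4):
    """Return the index of the first window in which at least `needed`
    of `window` consecutive pixels exceed `threshold`, else None."""
    for i in range(len(row) - window + 1):
        hits = sum(1 for v in row[i:i + window] if v > threshold)
        if hits >= needed:
            return i
    return None

# Dark background, then the bright skirt; one dropout pixel is tolerated.
row = [1, 0, 2, 1, 7, 6, 3, 7, 7, 5, 1]
print(find_agc_skirt(row))  # 4: the window [7, 6, 3, 7, 7] qualifies
```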
  • the AGC process is performed in three stages (see Figure 12)
  • In the first stage, AGC skirts in sections 2-5 are located and their centers determined.
  • In stage 2, the AGC skirts in sections 1 and 6 are located and their centers found, while the first three zones encountered (in sections 2-5) are undergoing intensity normalization.
  • By the end of stage 2, the centers of the AGC skirts in all sections have been located; in stage 3, the entire width of the sensor undergoes intensity normalization as each row of the sensor is read out.
  • the corner-locating algorithm is performed in two steps: a) coarse corner location (defines a region in which the reference pixel (origin) will be found); b) true corner location (exactly selects the reference pixel).
  • the above two steps, in combination, function to locate all the corner symbols for the entire patch
  • Each Corner Symbol acts as a reference point for analyzing the fiducial patterns
  • the location of a reference point (sensor pixel location, point (Rc, Cc) in Figure 13) also acts as an origin from which all displacement computations are made within that zone. Four corner symbols are associated with each zone, but only one of the four is defined as the origin for that zone; in the current embodiment, the zone's upper-left corner symbol is used.
  • the coarse corner location process is a fast, computationally inexpensive method of finding corner locations within a few pixels.
  • the true corner location process locates the reference pixel of the corner symbol with greater precision. Using the coarse corner location process to narrow the search minimizes the computational overhead required.
  • the coarse corner location involves locating the column positions of the AGC skirt centers at the top of the patch, and the row positions of the AGC skirts on the side of the patch. These coordinates in the 'row' and 'column' directions combine to give the coarse corner locations (see Figure 13 and Figure 15).
  • the reference pixel origin (Rc, Cc) (see Figure 13) is the pixel location on the sensor array where convolution with the spatial filter yields a maximum value.
  • the convolution process in the flow chart of Figure 15 is carried out in process steps 50-69 as shown
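The true-corner search can be sketched as a matched-filter maximum: correlate a known corner-symbol template against the image and take the location with the highest response. The template, image values, and function names below are illustrative, not the patent's actual corner-symbol pattern.

```python
# Illustrative matched-filter corner search: the reference pixel is the
# position maximizing correlation with a known corner-symbol template.

def correlate_at(image, template, r, c):
    """Sum of element-wise products of `template` laid over `image`
    with the template's top-left element at (r, c)."""
    h, w = len(template), len(template[0])
    return sum(image[r + i][c + j] * template[i][j]
               for i in range(h) for j in range(w))

def find_reference_pixel(image, template):
    """Return the (r, c) that maximizes the correlation."""
    h, w = len(template), len(template[0])
    candidates = [(r, c) for r in range(len(image) - h + 1)
                  for c in range(len(image[0]) - w + 1)]
    return max(candidates, key=lambda rc: correlate_at(image, template, *rc))

template = [[1, 1],
            [1, 0]]          # hypothetical corner-symbol pattern
image = [[0, 0, 0, 0],
         [0, 7, 6, 0],
         [0, 7, 1, 0],
         [0, 0, 0, 0]]
print(find_reference_pixel(image, template))  # (1, 1)
```

In practice the coarse corner location restricts `candidates` to a small neighborhood, which is what keeps the convolution computationally cheap.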
  • each fiducial region is processed and the alignment parameters for each zone Z1 through Z21 are determined.
  • the alignment algorithm determines the alignment parameters for each zone Z1 through Z21 by processing patterns embedded in the fiducials bordering that zone.
  • the fiducials contain regions of uniformly spaced symbol patterns. These regions provide a two-dimensional, periodic signal.
  • the alignment algorithm measures the phase of this signal in both the row and column directions at several points along the fiducial. A polynomial is fit to the set of phase values obtained at these points using a "least squares" analysis. The polynomial coefficients obtained in the least squares process are then used to determine the alignment parameters. As seen in Figures 16 and 24, four fiducials t, b, r, l are associated with every zone (one on each of four sides). Depending on the image quality, any combination from one to four fiducials could be used to calculate alignment parameters for the zone. The described embodiment uses all four; using fewer reduces processing overhead with some corresponding reduction in accuracy.
  • the first step in determining the alignment parameters involves a spatial filtering process.
  • the periodic signal resulting from the periodic symbol patterns in the fiducial is multiplied by a reference signal to generate a difference signal. This is done twice with two reference signals such that the two resulting difference signals are in phase quadrature.
  • the signals are then filtered to suppress sum frequencies, harmonic content, and noise.
  • the filtering process involves summing pixel values from a region on the fiducial.
  • the pixel values summed are first weighted by values in a manner that corresponds to multiplying the fiducial signal by the reference signals. In this way, the multiplication and filtering operations are combined.
  • the filter is defined by the extent of the pixel region summed, and multiplication by a reference signal is accomplished by weighting the pixel values.
  • Figure 18 illustrates this combined multiplication and filtering process for each of the x and y components.
  • the next step is to take the arc tangent of the ratio of quadrature to in-phase component.
  • the result is the signal phase.
  • the in-phase component is defined:
  • phase of the signal can now be determined by taking the arctangent:
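The quadrature demodulation and arctangent steps described above can be sketched as follows. The function name and sample parameters are illustrative; summing over a whole number of periods here plays the role of the low-pass filter, and `atan2(Q, I)` yields a phase estimate that is independent of signal amplitude.

```python
# Illustrative phase measurement: multiply a periodic fiducial signal by
# in-phase (cos) and quadrature (sin) references, low-pass by summing
# over full periods, then take the arctangent of Q/I.
import math

def fiducial_phase(samples, period):
    """Estimate the phase (radians) of a periodic signal of known period."""
    w = 2 * math.pi / period
    i_sum = sum(s * math.cos(w * n) for n, s in enumerate(samples))  # I
    q_sum = sum(s * math.sin(w * n) for n, s in enumerate(samples))  # Q
    return math.atan2(q_sum, i_sum)  # amplitude-independent phase

period = 8
offset = 1.5  # true phase shift, in radians
signal = [math.cos(2 * math.pi * n / period - offset)
          for n in range(4 * period)]
print(fiducial_phase(signal, period))  # close to 1.5
```

Because the estimate depends only on the ratio Q/I, scaling every sample by a constant intensity factor leaves the result unchanged, which is the amplitude independence the text claims.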
  • a convenient way of describing the alignment is to plot the phase of the fiducial signal as a function of position.
  • Figure 19 shows an example of phase plots for the signal in the row and column directions. Some noise will be present in any actual phase measurements.
  • Figures 20 and 21 are examples of typical x and y direction phase plots.
  • a polynomial is used to describe the curve. The coefficients of the polynomial are estimated using a least squares analysis.
  • the first step in performing the least squared error fit is to choose the order of the curve used to fit the data.
  • Figures 22 and 23 illustrate fitting first and second order curves to the phase data. While other functions could be used to fit the data, the preferred process uses polynomials which simplifies the least squares calculations for derivation of the coefficients.
  • the least squares error fit involves deriving the coefficients of the polynomial terms
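A first-order version of this least-squares fit can be written out directly from the normal equations; the function name and the sample phase data below are illustrative assumptions.

```python
# Illustrative least-squares line fit: phase(x) ~ c0 + c1*x, with the
# coefficients obtained by solving the 2x2 normal equations.

def fit_line(xs, ys):
    """Least-squares fit of y = c0 + c1*x; returns (c0, c1)."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    c1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
    c0 = (sy - c1 * sx) / n                         # intercept
    return c0, c1

# Noisy phase measurements along a fiducial; the underlying line is
# roughly 0.2 + 0.05*x, where the slope would reflect a magnification
# error and the intercept an offset.
xs = [0, 10, 20, 30, 40]
ys = [0.21, 0.69, 1.22, 1.68, 2.21]
c0, c1 = fit_line(xs, ys)
print(c0, c1)
```

A second-order fit adds a quadratic term and a 3x3 normal-equation system; the patent notes polynomials are preferred precisely because these least-squares systems stay simple.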
  • Each of the four alignment fiducials bordering a zone ( Figure 24) are analyzed and for each fiducial, a separate phase curve is generated for its x and y components.
  • the curves are generated using the filtering processes shown in Figure 18.
  • the vertical fiducials are processed in an equivalent manner with the appropriate coordinate transformation.
  • the coefficients for each polynomial fit are converted to alignment parameters. Eight sets of alignment parameters are generated, designated using "t" for the top fiducial, "b" for the bottom fiducial, "r" for the right fiducial, and "l" for the left fiducial.
  • the pixel values associated with data symbols are further processed by interpolation and pulse slimming to reduce the signal noise due to intersymbol interference (ISI).
  • ISI: intersymbol interference
  • ISI refers to the image degradation resulting from the image of one symbol position overlapping that of its nearest neighbors. ISI increases the signal-to-noise ratio (SNR) required for proper bit detection. ISI is encountered in one-dimensional encoding schemes in which the symbol size in the recording direction (e.g., along the "linear" track of a magnetic tape or an optical disk) is greater than the symbol-to-symbol spacing.
  • the "eye” is the region of values where there is no combination of symbol patterns that can overlap in such a way as to produce a value at that location It is in the eye region that the threshold value is set to differentiate between the presence of a symbol and the absence of a symbol Ideally, to decide whether or not a symbol is present, the threshold value is set to the value halfway between the upper and lower boundaries of the eye diagram ( Figure 25a)
  • Noise added to the signal has the effect of making the edges of the eye somewhat "fuzzy”
  • fuzzy is used here to describe the statistical aspect of noise that changes the actual amplitude of the signal
  • the alignment algorithm has the accuracy to position the center of a symbol image with at least the precision of ±1/4 pixel. Interpolation is invoked to account for the variation in energy distribution of a symbol image across the pixels (Figure 25c). This variation is due to the variable location of the symbol image relative to the exact center of the pixel. If a symbol is centered over a single pixel, the majority of the energy associated with that symbol will be found in that pixel; if the center of the symbol falls between pixels, the energy associated with that symbol will be distributed between multiple pixels (Figure 26).
  • a weighted summation of a 3 x 3 array of pixels is used as a measurement of the symbol energy
  • the 9 pixels in the array are chosen such that the calculated true symbol center lies somewhere within the central pixel of the 3 x 3 array
  • This central pixel location is subdivided into 16 regions, and depending on in which region the symbol is centered, a predetermined weighting is used in summing up the 3 x 3 array.
  • Figure 27 shows the location of the 16 regions on a pixel and their nine corresponding weighting patterns.
  • weights 0, ".25”, “.5", and "1" are chosen in this embodiment to minimize binary calculation complexity. (Each of these weights can be implemented by applying simple bit shifts to the pixel values.) In general, other weighting strategies could be used.
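The weighted 3x3 summation can be sketched as below. The particular weight pattern shown is illustrative (chosen as if the symbol center sat toward the lower-right of the central pixel), not the patent's actual table of nine patterns; only the restriction to weights 0, 0.25, 0.5, and 1 follows the text, since each can be applied in hardware with a bit shift.

```python
# Illustrative interpolation: weighted sum of a 3x3 pixel neighborhood,
# with weights limited to 0, 0.25, 0.5, 1 (bit-shift friendly). The
# weight pattern below is a hypothetical example for a symbol centered
# toward the lower-right of the central pixel.

WEIGHTS = [[0.0,  0.25, 0.0],
           [0.25, 1.0,  0.5],
           [0.0,  0.5,  0.25]]

def symbol_energy(pixels, r, c, weights=WEIGHTS):
    """Weighted sum of the 3x3 neighborhood centered at (r, c)."""
    return sum(weights[i][j] * pixels[r - 1 + i][c - 1 + j]
               for i in range(3) for j in range(3))

pixels = [[0, 0, 0, 0],
          [0, 8, 4, 0],
          [0, 4, 2, 0],
          [0, 0, 0, 0]]
print(symbol_energy(pixels, 1, 1))  # 12.5
```

In the patent's scheme, which of the 16 sub-pixel regions contains the computed symbol center selects which of the nine weight patterns is applied.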
  • Pulse slimming estimates the influence of neighboring symbols and subtracts the signal contribution due to their overlap from the signal read from the current sensor pixel being processed. It is an important feature of the preferred embodiment to perform pulse slimming after interpolation, that is, after the data are corrected for pixel position with reference to the sensor grid. Pulse slimming reduces the effect of the overlap, thereby increasing the size of the "eye" (see Figure 25d).
  • One method of assessing the effect of neighboring symbols is to estimate their position and subtract a fraction of the pixel value at these estimated neighboring positions from the value at the current pixel under study.
  • One implementation subtracts one eighth of the sum of the pixel values two pixels above, below, and on each side of each pixel in the zone being processed.
  • a 1 or 0 decision for each potential symbol location is made by comparing the magnitude of the processed symbol value (after pulse slimming and interpolation) to a threshold. If the corrected pixel value is below the threshold (low light), a "zero" is detected. If the corrected value is above the threshold value (high light), a "one" is detected.
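The pulse-slimming implementation and the threshold decision described above can be sketched together: subtract one eighth of the sum of the pixel values two pixels above, below, and on each side, then compare to a threshold. The function name, boundary handling (out-of-range neighbors treated as zero), and the threshold value are illustrative assumptions.

```python
# Illustrative pulse slimming + detection: for each pixel, subtract 1/8 of
# the sum of the values two pixels above, below, left, and right, then
# threshold the slimmed value to decide 1 or 0.

def slim_and_detect(img, threshold):
    rows, cols = len(img), len(img[0])

    def at(r, c):
        # Out-of-range neighbors contribute nothing (assumed behavior).
        return img[r][c] if 0 <= r < rows and 0 <= c < cols else 0

    bits = []
    for r in range(rows):
        bits.append([])
        for c in range(cols):
            neighbors = at(r - 2, c) + at(r + 2, c) + at(r, c - 2) + at(r, c + 2)
            slimmed = img[r][c] - neighbors / 8.0
            bits[r].append(1 if slimmed > threshold else 0)
    return bits

# Center pixel reads 2 purely from ISI bleed of four bright neighbors;
# slimming drives it below threshold so it correctly decodes as 0.
img = [[8, 0, 8, 0, 0],
       [0, 0, 0, 0, 0],
       [8, 0, 2, 0, 8],
       [0, 0, 0, 0, 0],
       [0, 0, 8, 0, 0]]
print(slim_and_detect(img, 4))
```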
  • STEP 9 PERFORM ADDITIONAL ERROR DETECTION AND CORRECTION (EDAC)
  • the sensor IC of Figure 28 combines sensor 27 and image digitizer 29 and converts photonic energy (light) into an electronic signal (an analog process)
  • the sensor IC 27 includes an array 27a of sensing elements (pixels) arranged in a planar grid placed at the focal plane of the data image, and senses light incident on each element or pixel. The accumulated pixel charges are sequentially shifted to the edge of the pixel array and preamplified.
  • the analog voltage level at each pixel is digitized with three bits (eight levels) of resolution
  • This accumulated digital representation of the image is then passed to the ABR IC, which combines the functions of RAM 30 and the alignment/bit-retrieval algorithm shown in Figure 1 (Data Alignment and Bit Retrieval, ABR IC).
  • the ABR IC of Figure 28 is a logical module or integrated circuit which is purely digital in nature
  • the function of this module is to mathematically correct the rotation, magnification, and offset errors in the data image in an algorithmic manner (taking advantage of embedded features in the data image called fiducials)
  • data is extracted by examining the amplitude profiles at each projected symbol location. Random access memory (RAM) 30, which in this embodiment is in the form of a fast SRAM, holds the digitized data image from the sensor IC, and specific processing performs the numerical operations and processes described herein for image alignment and data bit retrieval.
  • the sensor IC is made up of silicon light-sensing elements. Photons incident on silicon strike a crystal lattice, creating electron-hole pairs. These positive and negative charges separate from one another and collect at the termini of the field region, producing a detectable packet of accumulated charge.
  • the charge level profile produced is a representation of light intensity profiles (the data image) on the two-dimensional sensor plane
  • the sensor plane is a grid of distinct (and regular) sensing cells called pixels which integrate the generated charge into spatially organized samples
  • Figure 29 shows, graphically, how the light intensity of the image (shown as three-dimensional profiles) affects the pixel signal magnitude
  • Pixel signal magnitude is a single valued number representative of the integrated image intensity (energy) profile over the pixel
  • Magnification and registration tolerances and guardband define the required sensor array dimensions
  • the sensor 27 (Figure 28) must be large enough to contain the complete image in the event of maximum magnification (specified in this example to be 22 to 1) and worst case registration error (specified to be less than +/- 100μ in both the x and y direction). Since the data patch on the media is 354 x 354 1μ-spaced symbols, the patch image on the sensor can be as large as 7788μ. Adding double the maximum allowable offset (200μ), to allow for either positive or negative offset, requires the sensing array to be at least 7988μ wide, or 799 10μ pixels
  • the Sensor IC design specifies an 800 x 800 pixel array
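The sizing arithmetic above can be checked directly (values taken from the text; units in microns):

```python
import math

symbols_per_side = 354    # data patch is 354 x 354 symbols
spacing_um = 1.0          # 1 micron symbol spacing on the media
max_magnification = 22.0  # worst-case magnification, 22 to 1
max_offset_um = 100.0     # worst-case registration error, +/- 100 microns
pixel_um = 10.0           # 10 micron pixels

image_um = symbols_per_side * spacing_um * max_magnification  # 7788 um
array_um = image_um + 2 * max_offset_um                       # 7988 um
min_pixels = math.ceil(array_um / pixel_um)                   # 799 pixels
# 799 required pixels fits within the specified 800 x 800 array.
```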
  • a preamplifier 80 converts signal charge to a voltage sufficient to operate typical processing circuitry here provided by digitizer and logic 29 followed by output buffers 82
  • the sensor IC architecture (Figure 30) specifies a preamplifier 80 for each row of pixels. Since entire columns of data are read out with each charge coupled device (CCD) cycle (one pixel per row across all 800 rows), the CCD operating frequency is a key parameter determining system performance
  • CCD charge coupled device
  • the CCD clock operates at 10 MHz. Designing output circuitry for every pixel row multiplies the per-cycle throughput of a standard full frame imager by the number of rows. In the preferred embodiment, this has the effect of increasing system performance by a factor of 800. System noise is predominantly a function of preamplifier design; therefore, careful attention is paid to the design and construction of the preamplifier. Important preamplifier parameters are gain, bandwidth and input capacitance. Gain must produce sufficient output signal relative
  • Suitable preamplifier designs are known and selected to meet the following specifications: Preamp Performance
  • the automatic gain control (AGC) scheme maximizes system performance by maximizing the dynamic range of image digitization, enhancing system accuracy and speed
  • the image amplitude (intensity) is monitored at predetermined points (AGC skirts) and this information is used to control the threshold levels of the A to D converters
  • the signal is primarily background noise, because by design, the image is aimed at the center of the sensor 27 and readout begins at the edge, which should be dark
  • the first signal encountered is from the image of the leading edge of the AGC skirt (see Figure 31)
  • the AGC skirt image is a 5 x 9 array of all "ones" and therefore transmits maximal light
  • the amplitude read from pixels imaging these features represents the maximum intensity expected anywhere on the full surface
  • a logic block in digitizer and logic 29 is designed to detect these peak value locations and under simple control, select the pixel row most closely aligned to the AGC features
  • the difference between the maximum and minimum signals represents the total A to D range, and accordingly sets the weight for each count
  • the value of the minimum signal represents the DC offset (or background light) present in the image. This offset is added to the A to D threshold
  • the sensor IC 27,29 including CCDs performs the digitization following preamplification
  • the ORAM embodiment described herein utilizes three bits (eight levels) of quantization indicated in Figure 32.
  • each preamplifier 80 output feeds directly into an A to D block, so there is an A to D per pixel row
  • the design here uses seven comparators with switched capacitor offset correction Thresholds for these comparators are fed from a current source which forces an array of voltages across a series of resistors.
  • the values of the thresholds are controlled by a network of resistors common to all pixel rows, and preset with the a priori knowledge of AGC pixel row image maximum and minimum amplitudes
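As a sketch of this relative thresholding, the seven comparator levels can be derived from the AGC minimum (the background/DC offset) and maximum, and a pixel's 3-bit code is then the count of thresholds its signal exceeds (function names are illustrative, not from the patent):

```python
def agc_thresholds(v_min, v_max, levels=8):
    # Seven equally spaced comparator thresholds between the AGC minimum
    # and maximum, yielding 8 quantization levels (3 bits).
    step = (v_max - v_min) / levels
    return [v_min + step * k for k in range(1, levels)]

def quantize(signal, thresholds):
    # 3-bit code: the number of comparator thresholds the signal exceeds.
    return sum(1 for t in thresholds if signal > t)
```

Because the thresholds track the measured minimum and maximum, a slowly varying intensity envelope across the patch is flattened out, as the text describes.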
  • Figure 32 shows typical A to D codes applied to an arbitrary signal
  • the result of this step is a three bit (eight level) representation of pixel voltage. This value represents the intensity of incident light, relative to local conditions. The net effect of this relative thresholding is to flatten out any slowly varying image intensity envelope across the patch. The digitized image, now normalized, is ready for output to the ABR function.
  • the A to Ds produce a three bit value for each pixel row
  • the sensor pixel clock operates at 20MHz
  • the sensor outputs 2400 bits (800 rows of three-bit values) every 50nS. A 200 bit wide bus, running at 240MHz, couples the sensor IC to the ABR IC of Figure 28
  • Each output buffer is assigned to four pixel rows, with each pixel row producing three bits per pixel clock cycle. At each pixel clock cycle, the output buffer streams out the twelve bits generated in time to be ready for the next local vector. While this scheme is realizable with current technology, advances in multilevel logic could result in a significant reduction in the bandwidth required
  • the Sensor includes a central control logic block whose function is to generate clocking for image charge transfer; provide reset signals to the preamplifiers, A to D converters and peak detectors, actuate the AGC row selection, and enable the data output stream
  • Figure 33 depicts the conceptual signal flow on the Sensor IC
  • the control block is driven with a 240MHz master clock, the fastest in the system. This clock is divided to generate the three phases required to accomplish image charge transfer in the CCD
  • the reset and control pulses, which cyclically coordinate operation of the preamplifier with charge transfer operations and the A to D, are derived from the charge transfer phases and are synchronized with the master clock.
  • the output buffer control operates at the full master clock rate (to meet throughput requirements), and is sequenced to output the twelve local bits prior to the next pixel clock cycle.
  • Figure 33 shows the major timing elements of the sensor control.
  • the three CCD phases work together to increment charge packets across the imaging array a column at a time. When the third phase goes low, charge is input to the preamplifier.
  • the preamplifier reset is de- asserted just prior to third phase going low so it can process the incoming charge. Also just prior to the third phase going low, and concurrent with the pre-amp reset, the A to D converters are reset, zeroed and set to sensing mode.
  • the principal elements of the ORAM data correction electronics are illustrated in Figure 34, which shows an alignment and bit retrieval IC 32 receiving raw data from the sensor IC 27,29.
  • the IC 32 electronics include FAST SRAM, alignment circuitry, bit retrieval circuitry, and EDAC circuitry.
  • ABR alignment and bit retrieval
  • Step 5 is a series of convolutions performed on the zone fiducial image to yield the zone's "in-phase" and "quadrature" terms in the "x" direction (hence the designations I and Q).
  • Step 6 least squares fit (LSF), combines the I and Q values to form a line whose slope and intercept yield the "x" axis offset and symbol separation distance. Similar steps yield the "y” axis information. Use of the resultant "x” and "y” information predicts the exact locations of every symbol in the zone.
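Step 6 can be illustrated with a plain least squares line fit: the slope and intercept of the fitted line give the symbol separation distance and the axis offset. A minimal sketch with hypothetical numbers (not from the patent):

```python
def least_squares_line(xs, ys):
    # Ordinary least squares fit of y = slope * x + intercept.
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

# Fiducial index vs. measured position: spacing ~2.5 pixels, offset ~1.2 pixels.
slope, intercept = least_squares_line([0, 1, 2, 3], [1.2, 3.7, 6.2, 8.7])
```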
  • the next two operations are signal enhancement processing steps to improve the system signal-to-noise ratio (SNR).
  • SNR system signal-to-noise ratio
  • pulse slimming reduces the potential for intersymbol interference (ISI) caused by neighboring symbols, and interpolation accommodates the possibility of several adjacent pixels sharing symbol information.
  • ISI intersymbol interference
  • bit decisions can be made by simply evaluating the MSB (most significant bit) of symbol amplitude representation (step 8). This is the binary decision process step converting image information (with amplitude profiles and spatial aberrations) into discrete digital bits.
  • the error detection and correction (EDAC) function (step 9) removes any residual errors resulting from media defects, contamination, noise or processing errors.
  • FIG. 34 shows in more detail a block diagram of the ABR IC 32.
  • the diagram portrays a powerful, special purpose compute engine.
  • the architecture of this device is specifically designed to store two-dimensional data and execute the specific ORAM algorithms to rapidly convert raw sensor signals to end user data.
  • This embodiment of ABR IC 32 includes an SRAM 91, micro controller and stored program 92, adder 94, accumulator 95, comparator 96, temporary storage 97, TLU 98, hardware multiplier 99, and SIT processor 100. Additionally, an output RAM buffer 102 and EDAC 103 are provided in this preferred embodiment.
  • Sensor data is read into fast RAM 91 in a process administered by autonomous address generation and control circuitry.
  • the image corners are coarsely located by the micro controller (μC) 92 and the approximate corner symbol pixel location for the zone of interest is found. The exact location of the reference pixel is found by successively running the correlation kernel described above; a specialized 8 word adder 94 with fast accumulator 95 and a comparator 96 speed these computations.
  • Detailed zone image attributes are determined by processing the image fiducial. This involves many convolutions with two different kernels. These are again facilitated by the 8 word adder and fast accumulator. Results of these operations are combined by multiplication, expedited by hardware resources. Divisions are performed by the micro controller (μC) 92
  • the arc tangent function can be accomplished by table look up (TLU) 98
  • the zone's image offset and rotation are known precisely. This knowledge is used to derive addresses (offset from the corner symbol origin) which describe the symbol locations in the RAM memory space
  • These offsets are input to the slimming-interpolator (SIT) 100, which makes a one or zero bit decision and delivers the results to an output RAM buffer 102 where the EDAC 103 function is performed
  • Image data is sequentially read from the Sensor IC to a RAM buffer on the ABR IC
  • This buffer stores the data while it is being processed
  • the buffer is large enough to hold an entire image, quantized to three bits
  • as the image data columns are sequenced off the Sensor, they are stored in memory, organized into stripes or segments 1 through n as illustrated in Figure 35
  • the width of these stripes is optimized depending on the technology selected for ABR IC implementation
  • the estimated stripe width is 40 cells; therefore 20 stripes are required (the product of these two numbers being 800, equal to the pixel width of the Sensor image area). This choice leads to a 2μSec latency between image data readout and the commencement of processing
  • the design includes a dedicated hardware adder whose function is to sum 8 three-bit words in a single step.
  • an 8 x 8 convolutional mask becomes an 8 step process compared to a 64 step process if the operation were completely serial
  • the input to the adder is the memory output bus, and its output is a 6 bit word (wide enough to accommodate the instance where all eight words equal 7, giving the result of 56)
  • the six bit word can represent 2⁶ = 64 values, which more than accommodates the worst case of 56. Convolutions in the current algorithm are two dimensional and the parallel adder is one dimensional
  • successive outputs of the adder must themselves be summed. This is done in the accumulator. At the beginning of a convolution, the accumulator is cleared. As the proper memory locations are accessed under control of the μController, the result of the adder is summed into the accumulator holding register. This summation can be either an addition or subtraction, depending on the convolution kernel coefficient values
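The adder/accumulator loop can be modeled as follows: each "hardware" step sums one 8-pixel row segment in a single operation, and the accumulator adds or subtracts that sum according to the kernel coefficient sign, so an 8 x 8 kernel takes 8 steps rather than 64. This is a behavioral sketch only; all names are hypothetical:

```python
def convolve_8x8(image, top, left, kernel_row_signs):
    # Model of the 8-word adder and accumulator: each memory access yields
    # one 8-pixel row segment, summed in a single step; the accumulator
    # adds or subtracts it per the kernel row's coefficient sign (+1 / -1).
    acc = 0
    for r, sign in enumerate(kernel_row_signs):       # 8 steps, not 64
        row_sum = sum(image[top + r][left:left + 8])  # the 8-word adder
        acc += sign * row_sum                         # accumulate +/- per kernel
    return acc
```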
  • the comparator function is employed where digital peak detection is required (e.g., when the corner symbol reference pixel is being resolved)
  • a convolution kernel matching the zone corner symbol pattern is swept (two dimensionally) across a region guaranteed large enough to include the corner pixel location. The size of this region is dictated by the accuracy of the coarse alignment algorithm
  • Each kernel iteration (Figure 36) tests whether the current result is greater than the stored result. If the new result is less than the stored value, it is discarded and the kernel is applied to the next location. If the new result is greater than the stored result, it replaces the stored result, along with its corresponding address. In this fashion, the largest convolution, and therefore the best match (and its associated address), is accumulated. This address is the (x, y) location of the zone's corner reference pixel
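This accumulate-and-compare sweep reduces to a running maximum over candidate addresses. A sketch, where `kernel_score` stands in for the matched-filter convolution evaluated at one location (names hypothetical):

```python
def find_corner(image, candidates, kernel_score):
    # Keep the largest convolution result and its (x, y) address.
    best_xy, best_score = None, None
    for xy in candidates:
        score = kernel_score(image, xy)
        if best_score is None or score > best_score:
            best_xy, best_score = xy, score
    return best_xy, best_score

# A 6 x 6 search region with a toy score function peaking at (3, 2):
region = [(x, y) for x in range(6) for y in range(6)]
score = lambda img, xy: -((xy[0] - 3) ** 2 + (xy[1] - 2) ** 2)
corner, peak = find_corner(None, region, score)  # -> (3, 2), 0
```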
  • the alignment algorithms utilize a least squares fit to a series of points to determine magnification and rotation
  • the least squares operation involves many multiplies
  • Many multipliers are available (e.g., pipelined, bit-serial, μControlled, Wallace Tree, etc.). This implementation uses a Wallace Tree structure.
  • the fundamental requirement is that the multiplier produce a 12 bit result from two 8 bit inputs within one cycle time
  • a Table Look Up (TLU) operation is used to perform this step, saving (iterative) computational time as well as IC surface area required for circuits dedicated to a computed solution
  • TLU Table Look Up
  • the interpolation and slimming (SIT) processor is a digital filter through which raw image memory data is passed.
  • the SIT circuit is presented with data one row at a time, and operates on five rows at a time (the current row and the two rows above and below it)
  • the circuit tracks the distance (both x and y) from the zone origin (as defined by the corner reference pixel)
  • Knowledge of the distance in "pixel space," coupled with the derived alignment parameters, yields accurate symbol locations within this set of coordinates
  • Figure 37 shows a portion of zone image mapped into memory
  • the interpolation and pulse slimming are signal processing steps to improve signal-to-noise ratio (SNR)
  • SNR signal-to-noise ratio
  • Pulse slimming estimates the portion of the total energy on a central symbol caused by light "spilling" over from adjacent symbols due to intersymbol interference. The process subtracts this estimated value from the total energy, reducing the effect of ISI
  • the algorithms in the current embodiment subtract, from every symbol value, a fraction of the total energy from adjacent symbols. Interpolation is used to define the pixel position closest to the true center of the symbol image. Because the Sensor array spatially oversamples the symbol image (4 pixels per average symbol), energy from any single symbol is shared by several pixels. The most accurate measure of the actual symbol energy is obtained by determining the percentage of the symbol image imaged onto each of the pixels in its neighborhood, and summing this energy.
  • the input to the interpolation and slimming processor is a cascaded series of image data rows and their neighbors. By looking at the data in each row, with knowledge of the calculated symbol location, decisions and calculations about the actual energy in each symbol are made. A final residual value establishes the basis for a 1 or 0 decision
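A one-dimensional sketch of the two steps described above (the actual processing is two-dimensional over five pixel rows; the function names and the ISI fraction are illustrative assumptions, not values from the patent):

```python
def symbol_energy(pixels, center, half_width=2):
    # Interpolation: sum the energy over the neighborhood of pixels sharing
    # this symbol's image (the sensor oversamples at ~4 pixels per symbol).
    lo = max(0, center - half_width)
    hi = min(len(pixels), center + half_width + 1)
    return sum(pixels[lo:hi])

def pulse_slim(energies, isi_fraction=0.25):
    # Pulse slimming: subtract an estimated fraction of each adjacent
    # symbol's energy to reduce intersymbol interference.
    out = []
    for i, e in enumerate(energies):
        left = energies[i - 1] if i > 0 else 0
        right = energies[i + 1] if i + 1 < len(energies) else 0
        out.append(e - isi_fraction * (left + right))
    return out
```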
  • the "Eye Diagram" for a system describes the probability of drawing the correct conclusions about the presence or absence of data. Due to the equalization effected by the AGC function, the maximum amplitude envelope should be fairly flat across the image. The most likely source of ripple will be the MTF of the symbol shape across the pixels.
  • the output of the SIT block is simple bits. For (approximately) every two rows of image pixel data, 64 bits will be extracted.
  • each zone contains 4096 data bits (64 x 64), represented by approximately 19000 (138 x 138) pixels on the sensor, depending on exact magnification
  • Each zone is approximately 138 x 138 pixels with 3 amplitude bits each, or about 57K bits, while it is being stored as image data.
  • these simple bits are passed along to the output buffer RAM where they are, in effect, re-compressed. The image (about 57K bits) ultimately yields 4096 bits of binary data, a reduction of about 14 to 1
  • the output buffer (Figure 39) stores the results of the SIT processor. It is a small RAM, 8192 bits, twice the size of a zone's worth of data. As bits are extracted from the zone, they are placed in the first half of this buffer. Once the zone decode is complete (and the first half of the buffer is full of new data from the zone), the EDAC engine begins to operate on it
  • Error Detection and Correction is performed by a conventional Reed- Solomon decoder well known in the state of the art.
  • This block of circuitry starts and stops the operations which perform zone location (coarse and fine), as well as the alignment, symbol image processing and correction.
  • the ⁇ Controller does not perform difficult arithmetic operations such as SIT, for which separate dedicated modules are available.
  • the Sensor IC delivers one complete row of pixel data (quantized to three bits) every 50nS, or at a rate of 20MHz.
  • AGC is performed in real time with peak detection circuitry, as the image is being read out to RAM, and thus does not add to the total data access time. All memory accesses and simple mathematical operations occur at a 100MHz (10nS) clock rate.
  • a hardware Multiply resource is available, with a propagation time of 10nS.
  • Physical Image offset is ⁇ 15 pixels in all orthogonal directions.
  • Readout, 93μS: Image magnification tolerances dictate a sensor plane with 800 x 800 pixels. Therefore, the average image falls ~50 pixels from the readout edge.
  • True Corner Location, 2.9μS: Coarse alignment locates the image to within a region of 6 x 6 pixels. Assuming that a hardware adder is available to sum 8 three bit values simultaneously, each pass through the corner kernel can be done in 4 memory operations. Because there is an "accumulate and compare" associated with these accesses, this number is doubled to 8 (per kernel pass). There are 36 locations to evaluate with the kernel, so it takes (4*2*36*10nS) 2.9μS.
  • Alignment Parameter component, 5.7μS: The I and Q sums each require 0.8μS (1.6μS total), assuming a hardware adder. This comes from 10 points x 8 accesses per point x 10nS per access.
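The cycle counts in these budget entries can be reproduced arithmetically from the 10nS memory cycle (100MHz clock) stated elsewhere in the text:

```python
mem_cycle_ns = 10  # 100 MHz memory / arithmetic clock

# True corner location: 36 candidate locations, 4 adder passes per kernel
# application, doubled for the accumulate-and-compare.
corner_ns = 36 * 4 * 2 * mem_cycle_ns  # 2880 nS, i.e. ~2.9 uS

# I and Q sums: 10 points x 8 accesses per point, once for I and once for Q.
iq_ns = 2 * (10 * 8 * mem_cycle_ns)    # 1600 nS, i.e. 1.6 uS
```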
  • the RAM storing the Sensor image data must be fast enough to handle the cycle times imposed by this. Analysis indicates this rate is 200 parallel bits every 4.2nS. The segmented RAM design facilitates this by keeping row lengths short.
  • Critical paths include CMOS logic which propagates at about 200pS (200E-12 seconds) per gate delay, and toggle rates on flip-flops that exceed 500MHz. By using sufficient parallelism in logic design, the timing constraints discussed below are easily met.
  • the ORAM μController cycles at greater than 100MHz. Hardware acceleration of additions, multiplies, and comparisons needs to operate at this cycle time.
  • any local storage as well as the RAM is selected to be able to support this timing
  • AGC Automatic gain control
  • ADC analog to digital converters
  • the "true" zone location information is the coordinate pair defining the pixel location closest to the center of the symbol (or collection of symbols) comprising the zone's corner reference
  • the corner reference of a zone is the point from which all other symbols in a zone are referenced by the bit retrieval algorithm.
  • a corner symbol locating algorithm is used.
  • the current embodiment performs a local convolution in a small area surrounding the coarse zone location.
  • the convolution uses a convolving kernel that approximates a matched filter to the corner reference pattern.
  • the area of convolution is equal to the area of the kernel plus nine pixels in both the row and column directions and is centered on the coordinates found in the coarse corner location process.
  • Alignment is the process of determining the positions of the image symbols relative to the fixed pixel positions on the CCD array.
  • any set of functions (Xᵃ, cos(X), αX + β, etc.) might be used to describe this relationship, as long as the function provides an accurate approximation of the symbol positions.
  • the relationship between the symbol positions and the pixel positions is described using polynomials.
  • a first order polynomial accurately locates the symbols providing there is a constant magnification over a zone.
  • a second order polynomial can locate the symbols providing there is a linear change in the magnification over a zone (1st order distortion). Higher order polynomials can be used to account for higher order distortions over the zone.
  • the alignment process becomes the process of determining the alignment parameter values.
  • the alignment algorithm determines each zone's alignment parameters by processing embedded alignment patterns (fiducials) bordering that zone.
  • the fiducials are uniformly spaced arrays of symbols.
  • the fiducials are interpreted as a two dimensional periodic signal.
  • the above described and currently preferred embodiment uses a sensor grid somewhat larger than the page (patch) image
  • another approach might allow for a sensor grid smaller than the image page which is then stepped across or scanned across the projected data image
  • the AGC and alignment fiducials are distinct from the changeable data, but alternatively it is possible to use the data portion of the signal in addition to, or as, the fiducials for driving the AGC circuitry. Basically, the data could be encoded in such a manner as to ensure a certain amount of energy in a particular spatial frequency range
  • a low pass and band pass or high pass filter could be used to drive the AGC process
  • the output of the low pass filter would estimate the dc offset of the signal and the output from the band pass or high pass filter would determine the level of gain (to be centered
  • Another embodiment of generating the alignment data is to have a series of marks (or a collection of marks) making up the fiducial. These marks include alignment marks (fiducials) that are interspersed in a regular or irregular manner throughout the data
  • the alignment polynomial could then be determined by finding the position of each mark and plotting it against the known spatial relationship between the marks
  • the least squared error method could then be used to generate the best fit polynomial to the relationship between the known positions and the measured positions
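Per the alignment discussion elsewhere in the text, a second order polynomial can capture linear magnification change, so the best-fit step can be sketched as a least squares fit of y = c0 + c1·x + c2·x² via the 3x3 normal equations (plain Gaussian elimination; all names hypothetical):

```python
def polyfit2(xs, ys):
    # Least squares fit of y = c0 + c1*x + c2*x^2 via the normal equations.
    S = [sum(x ** k for x in xs) for k in range(5)]          # power sums
    A = [[S[i + j] for j in range(3)] for i in range(3)]     # normal matrix
    b = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    # Gaussian elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution.
    coeffs = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        coeffs[r] = (b[r] - sum(A[r][c] * coeffs[c] for c in range(r + 1, 3))) / A[r][r]
    return coeffs  # [c0, c1, c2]
```

Given known mark spacings as xs and measured mark positions as ys, the fitted coefficients play the role of the alignment parameters.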

Abstract

A system is disclosed for retrieving data from an optical image containing two-dimensional data patterns imaged onto a sensor array. The data record is an optical data layer (19) capable of selectively altering light such as by changeable transmissivity, reflectivity, polarization, and/or phase. The sensor array (27) is a layer of charge coupled devices (CCDs) arrayed in a grid pattern generally conforming to the projected data page, but preferably the sensor grid is somewhat larger than the imaged data. To compensate for various optical effects, including translational and rotational offsets, magnification and distortion of the data image as it is converted to electrical data by the sensor array, raw image data is sensed on a grid larger than the page image and then electronically processed in an alignment and bit retrieval circuit (30, 32) to determine the true data corrected for displacement, rotation, magnification, and distortion. The processed, corrected data is then output to memory or throughput to applications.

Description

ALIGNMENT METHOD AND APPARATUS FOR RETRIEVING INFORMATION FROM A TWO-DIMENSIONAL DATA ARRAY
1.0 BACKGROUND OF THE INVENTION
The invention concerns systems for optically storing and retrieving data stored as light altering characteristics on an optical material and providing fast random access retrieval, and more particularly, an alignment method and apparatus sensing an optical image of the data and converting same to electrical data signals.
Optical memories of the type having large amounts of digital data stored by light modifying characteristics of a film or thin layer of material, and accessed by optical light addressing without mechanical movement, have been proposed but have not resulted in widespread commercial application. The interest in such optical recording and retrieval technology is due to its record density and faster retrieval of large amounts of data compared to that of existing electro-optical mechanisms such as optical discs, and magnetic storage such as tape and magnetic disc, all of which require relative motion of the storage medium.
For example, in the case of optical disc memories, it is necessary to spin the record and move a read head radially to retrieve the data, which is output in serial fashion. The serial accessing of data generally requires transfer to a buffer or solid state random access memory of a data processor in order to accommodate high speed data addressing and other data operations of modern computers. Other storage devices such as solid state ROM and RAM can provide the relatively high access speeds that are sought, but the cost, size, and heat dissipation of such devices when expanded to relatively large data capacities limit their applications.
Examples of efforts to provide the relatively large capacity storage and fast access of an optical memory of the type that is the subject of this invention are disclosed in the patent literature, such as U.S. Patent 3,806,643 for PHOTOGRAPHIC RECORDS OF DIGITAL INFORMATION AND PLAYBACK SYSTEMS INCLUDING OPTICAL SCANNERS and U.S. Patent 3,885,094 for OPTICAL SCANNER, both by James T. Russell; U.S. Patent 3,898,005 for a HIGH DENSITY OPTICAL MEMORY MEANS EMPLOYING A MULTIPLE LENS ARRAY; U.S. Patent No. 3,996,570 for OPTICAL MASS MEMORY; U.S. Patent No. 3,656,120 for READ-ONLY MEMORY; U.S. Patent No. 3,676,864 for OPTICAL MEMORY APPARATUS; U.S. Patent No. 3,899,778 for MEANS EMPLOYING A MULTIPLE LENS ARRAY FOR READING FROM A HIGH DENSITY OPTICAL STORAGE; U.S. Patent No. 3,765,749 for OPTICAL MEMORY STORAGE AND RETRIEVAL SYSTEM; and U.S. Patent No. 4,663,738 for HIGH DENSITY BLOCK ORIENTED SOLID STATE OPTICAL MEMORIES. While some of these systems attempt to meet the above mentioned objectives of the present invention, they fall short in one or more respects.
1.1 SUMMARY OF THE INVENTION In a system for storing and retrieving data from an optical image containing two dimensional data patterns imaged onto a sensor array for readout, a method and apparatus are provided for detecting and compensating for various optical effects including translational and rotational offsets, magnification, and distortion of the data image as it is converted to electrical data by the sensor array. Data may be stored, for example, in an optical data layer capable of selectively altering light such as by changeable transmissivity, reflectivity, polarization, and/or phase. In one embodiment using a transmissive data layer, data bits are stored as transparent spots or cells on a thin layer of material and are illuminated by controllable light sources to project an optically enlarged data image onto an array of sensors. Data is organized into a plurality of regions or patches (sometimes called pages). Selective illumination of each data page and its projection onto the sensor array accesses the data page by page from a layer storing many pages, e.g., of a chapter or book. The present invention may be used in optical memory systems described in U.S. Patent No. 5,379,266; U.S. Patent No. 5,541,888; international application Nos. PCT/US92/11356, PCT/US95/04602, PCT/US95/08078, and PCT/US95/08079; and copending U.S. Application SN 08/256,202, which are fully incorporated herein by reference. The sensor array may be provided by a layer of charge coupled devices (CCDs) arrayed in a grid pattern generally conforming to the projected data page, but preferably the sensor grid is somewhat larger than the imaged data. The data image generates charge signals that are outputted into data bucket registers underlying photosensitive elements. Alternatively, other output sensor arrays may be employed, including an array of photosensitive diodes, such as PIN type diodes.
Systems of the above type, and other devices in which optical data are written or displayed as two-dimensional data patterns in the form of arrays of cells, symbols or spots, require a process or logical algorithm, implemented in hardware and/or software, to process signal values from sensor elements in order to locate and decode the data. In general, there will not be a direct correspondence between a sensor element or cell and a binary "zero" or "one" value. Rather, most data encoding techniques will result in a local pattern of sensor cell values corresponding to some portion of an encoded bit stream. In all but the least dense codes, each sensor cell value must be interpreted in the context of the neighboring cell values in order to be translated to one or more bit values of the encoded data. The specific embodiment described below refers to On Off Keyed (OOK) encoded data. A simple example could use a transparent spot in the data film layer to represent a "one" value, while an opaque spot would correspond to a "zero" value. If the two-dimensional data array in question is a data pattern, optically projected onto a grid of an optical sensor (for example, a CCD camera), and the data pattern overlays and aligns to the sensor grid in a prescribed manner, there are five modes in which the data can be misregistered. These misregistrations may occur singly, or in combination, and manifest themselves as X axis and Y axis displacement error;
focal (Z axis) error; rotational error about an origin; magnification error; and distortion. Focal (Z axis) misregistration can be minimized by careful optical and mechanical design, as is done in the embodiment disclosed herein. In addition to misregistrations, the imaged data may be contaminated by electrical noise, by optical resolution limits, and by dust or surface contamination on the data media and/or optical sensor. Although it is possible to compensate for linear misregistrations by mechanical methods such as sensor stage rotation, or mechanical (X-Y axis) translation, it is often not desirable to do so because of mechanical complexity, cost, and speed constraints. Nonlinear misregistrations are considerably more difficult, if not impossible, to correct mechanically. Similarly, it is usually not possible to compensate for random contamination by mechanical means alone, but such contamination can be substantially compensated for by use of known error correction codes (ECCs).
In accordance with the preferred embodiment of the present invention, raw image data is sensed on a grid larger than the page image and then electronically processed to determine the true data corrected for displacement, rotation, magnification and distortion. The processed, corrected data is then output to memory or throughput to applications.
In the preferred embodiment, the sensor structure is a two-dimensional array of larger area than the two-dimensional data image projected onto the sensor array, and the individual sensor elements are smaller and more numerous (i.e., denser) than the data image symbols or spots in order to oversample the data image in both dimensions. For example, two or more sensing elements are provided in both dimensions for each image spot or symbol representing data to be retrieved. About four sensing elements are provided in the disclosed embodiment for each image spot, and intensity values sensed by the multiple sensor elements per spot are used in oversampling and correction for intersymbol interference. Each page or patch of data is further divided into zones surrounded by fiducials of known image patterns to assist in the alignment processes and gain control for variations of image intensity. In carrying out these operations, the analog level sensed at each of the oversampling sensor elements is represented by a multibit digital value, rather than simply detecting a binary, yes or no illumination. The preferred embodiment includes automatic gain control (AGC) of image intensity, which is initiated outboard of data zones by using AGC skirts of known image patterns and AGC peak detection circuit processes to track the image intensity across the entire plane of each data zone. The peak detection process and associated circuitry preferably uses a two-dimensional method that averages a baseline signal of amplitude along one axis and a linear interpolation of the peak detection amplitude along the other orthogonal axis.
Additional features of the preferred embodiment include the provision of alignment fiducials containing embedded symbols of known patterns and positions relative to the zones of data symbol positions, and the fiducial patterns have predetermined regions of maximum light and dark image content which provide periodic update of the AGC processes summarized above. Using these processes, a coarse alignment method determines the approximate corner locations of each of multiple zones of data, and this is followed by a second step of the location procedure: processing corner location data to find a precise corner location. Preferably, the precise or fine corner locating scheme uses a matched filter technique to establish an exact position of a reference pixel from which all data positions are then computed.
Alignment of the data to correct for various errors in the imaging process in the preferred embodiment uses polynomials to mathematically describe the corrected data positions relative to a known grid of the sensor array. These alignment processes, including the generation of polynomials, make use of in-phase and quadrature spatial reference signals to modulate to a baseband a spatial timing signal embedded in the alignment fiducial, which is further processed through a low pass filter to remove the spatial noise from the timing signal. In this manner, the combination of in-phase and quadrature spatial reference signals generates an amplitude independent measure of the timing signal phase as a function of position along the fiducial. To generate the polynomials that determine the correct alignment of data based on the alignment fiducials, the preferred embodiment uses a least squares procedure to generate the best fit of a polynomial to the measured offsets. The coefficients of the polynomials are then used to derive alignment parameters for calculating the displacement of data spot positions due to the various misalignment effects arising from optical, structural, and electrical imperfections. As a feature of the preferred processing, second order polynomial fit information is employed to estimate the optical distortion of the image projected onto the sensor. After alignment, the recovered image information is further refined by using a two-dimensional pulse slimming process in the preferred embodiment to correct for two-dimensional intersymbol interference.
The sensor employs a broad channel detection architecture enabling data of exceptionally long word length to be output for use in downstream data processes.
1.2 BRIEF DESCRIPTION OF THE DRAWINGS
The foregoing and other features of the present invention will be more fully appreciated when considered in light of the following specification and drawings, in which:
Figure 1 is a block diagram of the ORAM system in accordance with the preferred embodiment.
Figure 2 shows illustrations of data media at different magnifications to show the breakdown of the data hierarchy from a "chapter" into "patches" (also called pages), a "patch" (page) into "zones", and "zones" into data symbols or spots.
Figure 3 shows a portion of a data pattern portrayed as rotated, translated, and somewhat distorted with respect to the orthogonal sensor co-ordinates (three of the several forms of image defects which the method corrects).
Figure 4 is an illustration of a patch with an exploded view of a corner region containing a corner symbol, two AGC "skirts", and portions of two alignment fiducials.
Figure 5 is a flow diagram overview of the sensor and alignment/bit retrieval process.
Figure 6 shows data patches before and after AGC.
Figure 7 illustrates an image of a patch showing the two sets of AGC skirts.
Figure 8 shows a comparison of possible paths for AGC analysis; when centered on the AGC skirt, the AGC process can analyze a known pattern.
Figure 9 is a diagram of a sensor array with a patch image projected on it, showing how the sensor is divided into six sections for analysis.
Figure 10 shows the process for finding the center of an AGC skirt.
Figure 11 is a diagram of how AGC normalizes intensity of the patch image, illustrating that in the readout direction, the A to D converter thresholds are set by the peak and valley detection circuitry, and in the lateral direction, linear interpolation is used to set the thresholds.
Figure 12 is a diagram of a patch showing the regions of the patch associated with the three modes of AGC operation.
Figure 13 shows a section of sensor image highlighting a corner region, corner symbol, and spot or pixel in the corner used as an origin for referencing positions of nearby data symbols or spots.
Figure 14 shows the AGC skirts and corner symbols purposely aligned such that the row and column positions of the AGC skirt centers can be combined into coordinate pairs which become a coarse measurement of the corner symbol locations.
Figure 15 is a flow chart of the corner symbol convolution process.
Figure 16 is a fragment of the data image at the sensor showing one of the zones with corresponding fiducials including corner symbols.
Figure 17 is a flow chart of the data alignment process.
Figure 18 illustrates the placement of the filters on the alignment fiducials.
Figure 19 shows the typical curve for phase in the x-direction as a function of x (assuming no noise).
Figure 20 shows values for phase in the x-direction as a function of x (including noise).
Figure 21 shows values for phase in the y-direction as a function of x (including noise).
Figure 22 shows a linear (first order) fit to phase values.
Figure 23 shows a quadratic (second order) fit to phase values.
Figure 24 is a diagram illustrating the labeling of the four fiducials surrounding a zone.
Figure 25 is an eye diagram showing the effects of noise, data spot interpolation, and pulse slimming.
Figure 26 illustrates the relationship between symbol position on the pixel array versus the weighting values used for interpolation.
Figure 27 shows the 16 regions of symbol position on the pixel and the corresponding pixel weights used for interpolation.
Figure 28 shows the ORAM electronics receiver subsystem including the sensor integrated circuit (IC).
Figure 29 shows relative pixel magnitude for single and grouped "ones".
Figure 30 is a functional block diagram of the sensor IC.
Figure 31 shows an AGC skirt layout.
Figure 32 shows A to D codes with respect to signal intensity.
Figure 33 shows the signal flow on the sensor IC of Figure 30.
Figure 34 shows an alignment-bit-retrieval (ABR) IC block diagram.
Figure 35 depicts the segmented memory design of the ABR IC.
Figure 36 shows the 8 word adder and accumulator function.
Figure 37 shows the zone in image memory.
Figure 38 shows related diagrams illustrating the interpolation and pulse slimming technique.
Figure 39 is a diagram of the output RAM buffer.
Figure 40 is a timing diagram from request to data ready access.
2.0 INTRODUCTION TO DETAILED DESCRIPTION
An image of a two-dimensional data array is formed on an optical sensor. Stored digital data is to be recovered from this image. A representative two-dimensional memory device to accomplish this data recovery is described in U.S. Patent No. 5,379,266, "Optical Random Access Memory" (ORAM), and Figure 1 shows a functional block diagram of an ORAM system 10 suitable for disclosing the alignment method and apparatus of the present invention.
In the embodiment of Figure 1, a record is made as indicated at 10a, in which user data is encoded and combined with fiducials in data patterns called patches or pages that are written onto record media 19. More particularly, and as fully disclosed in copending applications PCT/US92/11356 and USSN 08/256,202, user data is entered at 35 and encoded/ECC at 36, whereupon data and fiducial patterns are generated at 37 and written at 38 to media, such as an optical data layer capable of selectively alternating light in one or more of the above described ways. The data layer 19 thus prepared is then fabricated at 39 in combination with a lens array 21 to form a media/lens cartridge. In this example, the image is of a two-dimensional data field written by E-beam on a chromium-coated quartz media substrate. To retrieve the data from the record, the media/lens cartridge 17 is removably placed in an ORAM reader indicated at 10b, and the data from each patch or page is selectively back-illuminated so as to be projected onto a sensor 27.
An individual page or "patch" of data is back-illuminated when data in that patch is selected at 124 via a user data request provided at interface 23, as described in U.S. Patent No. 5,379,266. More specifically, system controller 125, as described in the above-mentioned pending applications PCT/US92/11356 and SN 08/256,202, coordinates the operations of a read source 124, alignment/bit retrieval processor 32, and decode and ECC 127. A lens system focuses the image onto a sensor array 27 which converts light energy into an electrical signal. As described more fully below, this signal is first sensed by analog circuitry, then converted to a digital representation of the image. This digital image representation is stored in RAM 30, whereupon it is operated on by the retrieval algorithms processor indicated at 32. The digitized image is processed to correct for mechanical, electrical, and optical imperfections and impairments, then converted to data and ECC at 127, and the data presented to the user via user interface 123. In the representative ORAM 10, the symbols (or spots) making up the pages of the record are disclosed in this embodiment as bits of binary value; however, the invention is also useful for non-binary symbols or spots including grayscale, color, polarization, or other changeable characteristics of the smallest changeable storage element in the record. These available symbol locations or cells are placed on a 1 micron square grid. Logical "ones" are represented by optically transparent 0.9 micron holes formed in an otherwise opaque surface, while "zeroes" are represented by regions that remain opaque (unwritten). Symbols are grouped into "zones" of 69 by 69 symbol positions, with 21 zones grouped to form a unit of data defined as a "patch".
Multiple patches comprise the unit of data defined as a "chapter". Chapters comprise the unit of data contained on a single removable data cartridge 17. The media layout architecture is depicted in Figure 2.
Using the method described herein, there need be no predetermined, fixed registration, alignment, or magnification of the data array image with respect to the sensor pixel array. The two requirements for the sensor array are (1) that it be somewhat larger in both X and Y dimensions than the image projected on it, to allow for some misregistration without causing the data image to fall outside the active sensor region, and (2) that it have a pixel density in both the row and column dimensions which is greater than the density of the projected symbol image so as to be sufficient to recover the data; in this embodiment the pixel density is approximately twice the symbol count projected on it. (The sensor hardware design providing this function is detailed in Section 4.1.) The alignment method described in this disclosure will locate the image data array on the sensor, determine the position of each individual data symbol in the image relative to the known sensor grid, and determine the digital value of each bit. A fundamental purpose of the herein disclosed alignment method and apparatus is to determine the spatial relationship between the projected image of the data array and the sensor array. The grid of the sensor array is formed by the known locations of the sensing cells or elements, which are sometimes called pixels in the following description.
Each zone is bounded on the corners by "corner symbols" and on the sides by alignment "fiducials". The function of the corner symbol is to establish an origin for analyzing the fiducials and calculating symbol positions. The fiducial patterns themselves are used to calculate the "alignment parameters". This disclosure describes the method and apparatus for Steps 2 through 8, collectively called "alignment and bit retrieval" (ABR); Steps 1, 9, and 10 are included for completeness. The logical functions associated with each step in Figure 5 are summarized on the following pages:
3.1. STEP 1: DATA REQUEST
A user request for data initiates an index search in RAM to determine the address of the patch(es) containing the desired data. The light source serving this data address is illuminated, projecting an image of the desired data through the optical system and onto the sensor. This image, projected on the sensor, is the input data for the alignment and bit retrieval apparatus.
3.2. STEP 2: READ SENSOR AND PERFORM AUTOMATIC GAIN CONTROL (AGC)
The goal of the AGC process is to normalize the intensity profile of the patch image and to adjust the analog thresholds of the A/D conversion so as to efficiently spread the range of analog values associated with the modulation depth over the available levels of digital representation. Figure 6 shows two images: the image on the left is of a patch as detected before the AGC process; the image on the right is of the same patch after AGC has been performed. Automatic gain control (AGC) is the process of modifying the gain of the amplifiers which set the threshold values for the analog to digital converters (ADCs). The term "automatic" implies that the gain adjustment of the amplifier "automatically" tracks variations in the image intensity: as image intensity increases, amplifier gain increases, and as image intensity decreases, amplifier gain decreases. The effect of AGC is to provide a digital signal to the analyzing electronics which is approximately equivalent to the signal that would be derived from an image with a constant intensity profile over the entire sensor. The closer the resulting normalized signal approximates a constant intensity profile, the lower the signal to noise ratio at which the device can operate without error. AGC is necessary because image intensity may vary across the sensor due to many causes, including variability in the illuminating light within the optical system and low spatial frequency variation in symbol transmittance or pixel sensitivity.
Amplifier gain is set based on the intensity read from predetermined "AGC regions" spaced throughout the data pattern. There are two types of AGC regions:
a) AGC "skirts" located on the perimeter of the data patch. AGC skirts are the first illuminated pixels encountered as the array is read out; they provide an initial measure of intensity as image processing begins.
b) AGC "marks" located in the alignment fiducials along each side of each data zone. AGC marks are used to update the amplifier gain as successive rows are read out from the sensor array.
As pixel values (the value of the light image falling on a sensor element) are read from the sensor array, the AGC skirts are used both to predict the locations of the AGC regions on the image plane and to set the initial gain of the ADCs. This is completed prior to processing the pixels corresponding to data symbol positions on the image. Figure 7 depicts an entire patch of 21 data zones. The data zones on the top and left edge of the patch have AGC skirts aligned with their respective fiducial regions. There are two sets of AGC skirts, one along the top and one along the side. Dual sets of skirts enable bi-directional processing of the image and provide reference points for estimating the positions of the corner symbols (discussed below). The AGC process consists of three operations:
Operation 1) Locating the AGC skirt
Operation 2) Determining the center of the AGC skirt regions
Operation 3) Performing the AGC function
Operations 1 and 2 constitute a spatial synchronization process directing the AGC circuitry to the AGC regions. Synchronizing the AGC circuitry to the AGC regions allows gain control independent of data structure (see Figure 8). During Operations 1 and 2, the threshold values for the A to D converters are set with default values. During Operation 3, the AGC process sets the A to D converter thresholds.
The above describes the three AGC operations in overview. A more detailed description of each operation is included in Section 3.2.1 and following below.
3.2.1. AGC OPERATION 1 - LOCATING THE AGC SKIRT
To find the AGC skirt, each row of the sensor is analyzed starting from the top edge. Each pixel row is read in succession and divided into six separate sections for analysis (Figure 9).
The algorithm defines the AGC skirt to be located when a specified number of neighboring pixels display an amplitude above a default threshold. In the current implementation, an AGC skirt is considered located when four out of five neighboring pixel values are higher than the threshold. When all four skirts in Sections 2 through 5 (as shown in Figure 9) are located, AGC Operation 1 is finished.
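The "four of five neighbors above threshold" rule can be sketched as follows. This is an illustrative model only: the function name, default threshold value, and sample row are invented for the example and are not taken from the actual sensor circuitry.

```python
# Sketch of the skirt-location rule: scan a row for the first 5-pixel
# window in which at least four pixel values exceed a default threshold.

def find_skirt(row, threshold=128, window=5, required=4):
    """Return the start index of the first qualifying window, or None."""
    for i in range(len(row) - window + 1):
        hits = sum(1 for v in row[i:i + window] if v > threshold)
        if hits >= required:
            return i
    return None

# A dim row with one bright skirt region (one dropout pixel inside it):
row = [10, 12, 9, 200, 210, 15, 205, 198, 11, 8]
print(find_skirt(row))  # first window with 4 bright pixels starts at 3
```

Tolerating one low pixel per window (four of five rather than five of five) makes the locator robust to a single dust speck or dead pixel inside the skirt.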
3.2.2. AGC OPERATION 2 - DETERMINING THE AGC SKIRT CENTER
In AGC Operation 2, the last row of pixels processed in Operation 1 is further processed to find the specific pixel locations that are most central to the AGC skirts. This operation involves processing the pixel values in the row with a series of combinatorial logic operations which first find the edges of the skirts and then iteratively move to the center. When the center of each skirt in Sections 2 through 5 is found, Operation 2 is finished. Figure 10 depicts the process for finding the center pixel of an AGC skirt.
3.2.3. AGC OPERATION 3 - PERFORMING THE AGC FUNCTION
Once the column positions defined by the center pixel of each AGC skirt have been found, the intensity of the overall image is tracked by monitoring these column positions. The tracking is performed by peak and valley detection circuitry. This tracking sets the threshold values for the A to D converters corresponding to the column of the pixel at the center of the AGC skirts. For those pixels falling between AGC skirt centers, threshold levels are set by a linear interpolation between the values of the AGC skirt centers on each side (Figure 11).
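The lateral interpolation of thresholds between skirt centers can be sketched as below. The column positions and threshold values are invented for the example; the actual device performs this function in analog tracking circuitry rather than software.

```python
# Sketch of lateral threshold interpolation: thresholds are known at the
# skirt-center columns, and columns between two centers get a linearly
# interpolated threshold. Columns outside the outermost centers are
# clamped to the nearest center's value (an assumption for this sketch).

def interpolated_threshold(col, centers):
    """centers: sorted list of (column, threshold) pairs at skirt centers."""
    cols = [c for c, _ in centers]
    if col <= cols[0]:
        return centers[0][1]
    if col >= cols[-1]:
        return centers[-1][1]
    for (c0, t0), (c1, t1) in zip(centers, centers[1:]):
        if c0 <= col <= c1:
            frac = (col - c0) / (c1 - c0)
            return t0 + frac * (t1 - t0)

centers = [(16, 100.0), (48, 140.0)]
print(interpolated_threshold(32, centers))  # midway between the centers
```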
The AGC operation must accommodate the fact that the AGC skirts for Sections 1 and 6 are encountered later in the readout of the sensor than those in Sections 2-5. To deal with this, the AGC process is performed in three stages (see Figure 12). In the first stage, AGC skirts in Sections 2-5 are located and their centers determined. In stage 2, the AGC skirts in Sections 1 and 6 are located and their centers found while the first three zones encountered (in Sections 2-5) are undergoing intensity normalization. In the third and final stage, the centers of the AGC skirts in all sections have been located, and the entire width of the sensor undergoes intensity normalization as each row of the sensor is read out.
3.3. STEP 3: PERFORM COARSE CORNER LOCATION
The corner locating algorithm is performed in two steps:
a) Coarse corner location (defines a region in which the reference pixel (origin) will be found).
b) True corner location (exactly selects the reference pixel).
The above two steps, in combination, function to locate all the corner symbols for the entire patch. Each corner symbol acts as a reference point for analyzing the fiducial patterns. The location of a reference point (sensor pixel location, point (Rc, Cc) in Figure 13) also acts as an origin from which all displacement computations are made within that zone. Four corner symbols are associated with each zone, but only one of the four is defined as the origin for that zone. In the current embodiment, the zone's upper left corner symbol is used.
In subsequent processing, alignment parameters are used to calculate the displacement of each symbol position from the zone origin. Dividing the corner location process into two subprocesses (coarse corner location and true corner location) minimizes processing time. The coarse corner location process is a fast, computationally inexpensive method of finding corner locations to within a few pixels. The true corner location process then locates the reference pixel of the corner symbol with greater precision. Using the coarse corner location process to narrow the search minimizes the computational overhead required.
Coarse Corner Location
The coarse corner location involves locating the column positions of the AGC skirt centers at the top of the patch, and the row positions of the AGC skirts on the side of the patch. These coordinates in the "row" and "column" directions combine to give the coarse corner locations (see Figure 13 and Figure 15).
3.4. STEP 4: PERFORM TRUE CORNER LOCATION (REFERENCE PIXEL) FOR EACH ZONE
Locating the true corner position and, more particularly, the reference pixel (origin) for a zone requires a spatial filtering operation. The spatial filter is a binary approximation to a matched filter which is "matched" to the shape of the corner symbol. The filter is an array of values with finite extent in two dimensions, which is mathematically convolved with the image data in the regions identified by the "coarse corner location" process as containing the reference pixel origin.
The reference pixel origin (Rc, Cc) (see Figure 13) is the pixel location on the sensor array where convolution with the spatial filter yields a maximum value. The convolution process in the flow chart of Figure 15 is carried out in process steps 50-69 as shown.
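The convolve-and-take-the-argmax search can be sketched as below. The kernel and pixel values are toy stand-ins; the actual binary matched filter is shaped like the corner symbol, whose pattern is not reproduced here.

```python
# Sketch of the true-corner search: slide a small binary "matched filter"
# over the coarse search window and return the position of maximum
# response. Toy 2x2 kernel and 4x4 image, not the real corner symbol.

def correlate_argmax(image, kernel):
    """Valid-mode 2-D correlation; returns (row, col) of the maximum."""
    kh, kw = len(kernel), len(kernel[0])
    best, best_rc = None, None
    for r in range(len(image) - kh + 1):
        for c in range(len(image[0]) - kw + 1):
            s = sum(image[r + i][c + j] * kernel[i][j]
                    for i in range(kh) for j in range(kw))
            if best is None or s > best:
                best, best_rc = s, (r, c)
    return best_rc

kernel = [[1, 1],
          [1, 0]]          # toy "corner" shape
image = [[0, 0, 0, 0],
         [0, 9, 9, 0],
         [0, 9, 0, 0],
         [0, 0, 0, 0]]
print(correlate_argmax(image, kernel))  # best match at (1, 1)
```

Restricting the search to the coarse-location window keeps this brute-force scan cheap, which is exactly the motivation given above for splitting corner location into two subprocesses.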
Once the reference pixel coordinates are established, each fiducial region is processed and the alignment parameters for each zone Z1-Z21 are determined.
3.5. STEP 5: CALCULATE ALIGNMENT PARAMETERS FOR EACH ZONE
3.5.1. THE ALIGNMENT ALGORITHM
The alignment algorithm determines the alignment parameters for each zone Z1-Z21 by processing patterns embedded in the fiducials bordering that zone. The fiducials contain regions of uniformly spaced symbol patterns. These regions provide a two-dimensional, periodic signal.
The alignment algorithm measures the phase of this signal in both the row and column directions at several points along the fiducial. A polynomial is fit to the set of phase values obtained at these points using a "least squares" analysis. The polynomial coefficients obtained in the least squares process are then used to determine the alignment parameters. As seen in Figures 16 and 24, four fiducials t, b, r, and l are associated with every zone (one on each of four sides). Depending on the image quality, any combination of from one to four fiducials could be used to calculate alignment parameters for the zone. The described embodiment uses all four; using fewer reduces processing overhead with some corresponding reduction in accuracy.
The general flow of the alignment algorithm is shown by processing steps 71-76 in Figure 17. To the right of each process step is a short description of its purpose.
3.5.2. APPLYING A SPATIAL FILTER TO THE FIDUCIAL SIGNAL
The first step in determining the alignment parameters involves a spatial filtering process. The periodic signal resulting from the periodic symbol patterns in the fiducial is multiplied by a reference signal to generate a difference signal. This is done twice with two reference signals such that the two resulting difference signals are in phase quadrature. The signals are then filtered to suppress sum frequencies, harmonic content, and noise.
The filtering process involves summing pixel values from a region on the fiducial. The pixel values summed are first weighted by values in a manner that corresponds to multiplying the fiducial signal by the reference signals. In this way, the multiplication and filtering operations are combined. The filter is defined by the extent of the pixel region summed, and multiplication by a reference signal is accomplished by weighting the pixel values. Figure 18 illustrates this combined multiplication and filtering process for each of the x and y components.
3.5.3. DETERMINING THE ALIGNMENT FIDUCIAL SIGNAL PHASE
The next step is to take the arc tangent of the ratio of quadrature to in-phase component. The result is the signal phase.
The in-phase component is defined:

$$A \cdot \cos\bigl(2\pi \cdot P(x) + \phi\bigr) \quad (3.1)$$

where $P(x)$ is the x-dependent part of the phase. The quadrature component is defined:

$$A \cdot \sin\bigl(2\pi \cdot P(x) + \phi\bigr) \quad (3.2)$$

Dividing the quadrature by the in-phase component removes the amplitude dependence:

$$\tan\bigl(2\pi \cdot P(x) + \phi\bigr) = \frac{A \cdot \sin\bigl(2\pi \cdot P(x) + \phi\bigr)}{A \cdot \cos\bigl(2\pi \cdot P(x) + \phi\bigr)} = \frac{\sin\bigl(2\pi \cdot P(x) + \phi\bigr)}{\cos\bigl(2\pi \cdot P(x) + \phi\bigr)} \quad (3.3)$$

The phase of the signal can now be determined by taking the arctangent:

$$\text{phase} = 2\pi \cdot P(x) + \phi = \tan^{-1}\Bigl(\tan\bigl(2\pi \cdot P(x) + \phi\bigr)\Bigr) \quad (3.4)$$
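As a numeric sketch of Eqs. 3.1-3.4, the fiducial samples can be multiplied by in-phase and quadrature references and summed over a window (the combined multiply-and-filter step of Section 3.5.2), after which a two-argument arctangent recovers the phase independent of amplitude. The period, amplitude, and phase values below are invented test values.

```python
import math

# Sketch of quadrature phase measurement: correlate the sampled periodic
# fiducial signal against cosine (in-phase) and sine (quadrature)
# references of known period, then take atan2(Q, I). Summing over whole
# periods acts as the low-pass filter that suppresses sum frequencies.

def fiducial_phase(samples, period):
    """Estimate the phase (radians) of a periodic signal of known period."""
    i_sum = sum(s * math.cos(2 * math.pi * k / period)
                for k, s in enumerate(samples))
    q_sum = sum(s * math.sin(2 * math.pi * k / period)
                for k, s in enumerate(samples))
    # atan2 of the ratio removes the amplitude dependence (Eqs. 3.3-3.4)
    return math.atan2(q_sum, i_sum)

period = 8.0          # samples per fiducial cycle (invented)
true_phase = 0.7      # radians (invented)
samples = [5.0 * math.cos(2 * math.pi * k / period - true_phase)
           for k in range(32)]  # four whole periods
print(round(fiducial_phase(samples, period), 3))  # recovers 0.7
```

Note that the recovered value is unchanged if the amplitude 5.0 is replaced by any other positive value, which is the amplitude independence claimed for the quadrature method.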
A convenient way of describing the alignment is to plot the phase of the fiducial signal as a function of position. Figure 19 shows an example of phase plots for the signal in the row and column directions. Some noise will be present in any actual phase measurements. Figures 20 and 21 are examples of typical x and y direction phase plots. To approximate the phase curve from the measured data, a polynomial is used to describe the curve. The coefficients of the polynomial are estimated using a least squares analysis.
3.5.4. PERFORMING A LEAST SQUARES FIT TO THE DATA
The first step in performing the least squared error fit is to choose the order of the curve used to fit the data. Two examples, first order and second order polynomial curve fits, are represented in Figure 22 and Figure 23, which illustrate fitting first and second order curves to the phase data. While other functions could be used to fit the data, the preferred process uses polynomials, which simplifies the least squares calculations for derivation of the coefficients.
The least squares error fit involves deriving the coefficients of the polynomial terms.

Derivation of the alignment parameters for the first order (linear) least squares fit:

Given:

$$\text{phase} = \Phi = ax + b \quad (3.5)$$

(where $a$ and $b$ are the coefficients from the linear least squares fit)

And:

$$m = 2(f_0 + f_1 x) \quad (3.6)$$

(where $x$ is the position of the "m-th" symbol)

Solving (3.6) above for $x$ yields:

$$x = -\frac{f_0}{f_1} + \frac{m}{2 f_1} \quad (3.7)$$

Which can be rewritten as:

$$x = x_0 + m \cdot dx \quad (3.8)$$

where

$$x_0 = -\frac{f_0}{f_1} \quad \text{and} \quad dx = \frac{1}{2 f_1}$$

($x_0$ and $dx$ are defined as the X-axis alignment parameters). From Eq. 3.8 it can be seen that, using the alignment parameters, the position of any symbol ($x$) can be calculated.
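The first-order fit and the resulting alignment parameters can be sketched numerically as below. The mapping $f_1 = a/(2\pi)$, $f_0 = b/(2\pi)$ assumes the fitted phase equals $2\pi(f_0 + f_1 x)$; that mapping, and all sample values, are assumptions made for this illustration.

```python
import math

# Sketch of the linear least squares fit of phase = a*x + b, followed by
# conversion to the X-axis alignment parameters x0 = -f0/f1 and
# dx = 1/(2*f1), so that x = x0 + m*dx (Eq. 3.8).

def linear_lsq(xs, ys):
    """Ordinary least squares for y = a*x + b; returns (a, b)."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Invented phase samples along the fiducial: exactly pi of phase per
# sensor pixel step of 1 symbol per 2 pixels, plus a small offset.
xs = [0.0, 2.0, 4.0, 6.0, 8.0]
ys = [math.pi * x / 2.0 + 0.1 for x in xs]   # true a = pi/2, b = 0.1

a, b = linear_lsq(xs, ys)
f1, f0 = a / (2 * math.pi), b / (2 * math.pi)
x0, dx = -f0 / f1, 1 / (2 * f1)
print(round(dx, 6))  # recovers 2.0 pixels per symbol
```

With these invented numbers, dx comes out to 2.0 pixels per symbol, which is consistent with the roughly two-pixels-per-symbol oversampling described for this embodiment.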
A similar derivation for a second order polynomial fit is described below.
Derivation of alignment parameters using a second order (quadratic) fit:

Given:

$$\text{phase} = \Phi = ax^2 + bx + c \quad (3.9)$$

And using the relationship:

$$m = 2\bigl(f_0 + f_1 x + f_2 x^2\bigr) \quad (3.10)$$

Solving Eq. 3.10 for $x$ (the position of the "m-th" bit):

$$x = \frac{-f_1 + \sqrt{f_1^2 - 4 f_2 \left(f_0 - \tfrac{m}{2}\right)}}{2 f_2} \quad (3.11)$$

Which can be rewritten as:

$$x = x_0 + m \cdot dx + m^2 \cdot ddx \quad (3.12)$$

If the second order term is small compared to the first order term, these parameters can be approximated as:

$$x_0 \approx -\frac{f_0}{f_1}, \quad dx \approx \frac{1}{2 f_1}, \quad \text{and} \quad ddx \approx -\frac{f_2}{4 f_1^3}$$

(X-axis alignment parameters from a 2nd order fit)
3.5.5. COMBINING ALIGNMENT PARAMETERS FROM FOUR FIDUCIALS
Each of the four alignment fiducials bordering a zone (Figure 24) is analyzed, and for each fiducial a separate phase curve is generated for its x and y components. The curves are generated using the filtering processes shown in Figure 18. The vertical fiducials are processed in an equivalent manner with the appropriate coordinate transformation.
The coefficients for each polynomial fit are converted to alignment parameters. Eight sets of alignment parameters are generated. The eight sets of alignment parameters are designated using a "t" for top fiducial, "b" for bottom fiducial, "r" for right fiducial, and "l" for left fiducial.
The following is an example of alignment parameters derived from a quadratic least squares fit:

Top fiducial (t): t_x0, t_dx, and t_ddx (row); t_y0, t_dy, and t_ddy (column)
Bottom fiducial (b): b_x0, b_dx, and b_ddx (row); b_y0, b_dy, and b_ddy (column)
Right fiducial (r): r_x0, r_dx, and r_ddx (row); r_y0, r_dy, and r_ddy (column)
Left fiducial (l): l_x0, l_dx, and l_ddx (row); l_y0, l_dy, and l_ddy (column)
3.6. STEP 6: CALCULATE SYMBOL POSITIONS

These alignment parameters are combined to specify the location of the symbol in the m-th row and the n-th column with respect to the origin.

1st order curve fit:

$$X_{m,n} = t\_x_0 + n \cdot \frac{t\_dx \cdot (69-m) + b\_dx \cdot m}{69} + m \cdot \frac{l\_dx \cdot (69-n) + r\_dx \cdot n}{69} \quad (3.14)$$

$$Y_{m,n} = t\_y_0 + n \cdot \frac{t\_dy \cdot (69-m) + b\_dy \cdot m}{69} + m \cdot \frac{l\_dy \cdot (69-n) + r\_dy \cdot n}{69} \quad (3.15)$$

2nd order curve fit:

$$X_{m,n} = t\_x_0 + n \cdot \frac{t\_dx \cdot (69-m) + b\_dx \cdot m}{69} + n^2 \cdot \frac{t\_ddx \cdot (69-m) + b\_ddx \cdot m}{69} + m \cdot \frac{l\_dx \cdot (69-n) + r\_dx \cdot n}{69} + m^2 \cdot \frac{l\_ddx \cdot (69-n) + r\_ddx \cdot n}{69} \quad (3.16)$$

$$Y_{m,n} = t\_y_0 + n \cdot \frac{t\_dy \cdot (69-m) + b\_dy \cdot m}{69} + n^2 \cdot \frac{t\_ddy \cdot (69-m) + b\_ddy \cdot m}{69} + m \cdot \frac{l\_dy \cdot (69-n) + r\_dy \cdot n}{69} + m^2 \cdot \frac{l\_ddy \cdot (69-n) + r\_ddy \cdot n}{69} \quad (3.17)$$

It is noted that the value "69" occurs in equations 3.14 - 3.17 because, in the herein described implementation, the zones are 69 symbols wide, and therefore the fiducials are 69 symbols apart.
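A sketch of the first-order position calculation (Eq. 3.14) follows. The exact way the four fiducial parameter sets are blended is treated here as an assumption, and the parameter names and values are invented for the example: the per-symbol steps measured on opposite fiducials are weighted in proportion to how close the symbol is to each fiducial.

```python
# Hedged sketch of a first-order symbol-position calculation: the column
# step (dx from top/bottom fiducials) is interpolated in the row index m,
# and the row step (dx from left/right fiducials, i.e. shear) is
# interpolated in the column index n. All parameter values are invented.

ZONE = 69  # zone width in symbols; fiducials are 69 symbols apart

def symbol_x(m, n, p):
    """x-position of the symbol in row m, column n from parameters p."""
    col_step = (p['t_dx'] * (ZONE - m) + p['b_dx'] * m) / ZONE
    row_step = (p['l_dx'] * (ZONE - n) + p['r_dx'] * n) / ZONE
    return p['t_x0'] + n * col_step + m * row_step

# Undistorted toy case: 2 pixels per symbol, no shear, x origin at 10.0.
params = {'t_x0': 10.0, 't_dx': 2.0, 'b_dx': 2.0, 'l_dx': 0.0, 'r_dx': 0.0}
print(symbol_x(0, 10, params))  # 10.0 + 10 symbols * 2 px = 30.0
```

In the undistorted case the interpolation collapses to a uniform grid; rotation or keystone distortion appears as unequal t/b or l/r parameters, which the weighted blend then tracks smoothly across the zone.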
3.7. STEP 7: PERFORM INTERPOLATION AND PULSE SLIMMING
Next, the pixel values associated with data symbols (as opposed to fiducial symbols) are further processed by interpolation and pulse slimming to reduce the signal noise due to intersymbol interference (ISI).
ISI refers to the image degradation resulting from the image of one symbol position overlapping that of its nearest neighbors. ISI increases the signal to noise ratio (SNR) required for proper bit detection. ISI is encountered in one-dimensional encoding schemes in which the symbol size in the recording direction (e.g., along the "linear" track of a magnetic tape or an optical disk) is greater than the symbol-to-symbol spacing. This linear ISI is analyzed effectively with an "eye diagram." The fact that ORAM data is close-packed in both the x and y directions creates potential for overlap, not only from neighboring symbols on either side of the symbol in question, but also from symbols located immediately above and below, and to a lesser extent, on the diagonals. Despite this complication, the one-dimensional "eye diagram" analog still illustrates the processes involved (see Figure 25).
The "eye" is the region of values where there is no combination of symbol patterns that can overlap in such a way as to produce a value at that location. It is in the eye region that the threshold value is set to differentiate between the presence of a symbol and the absence of a symbol. Ideally, to decide whether or not a symbol is present, the threshold value is set to the value halfway between the upper and lower boundaries of the eye diagram (Figure 25a).
Noise added to the signal has the effect of making the edges of the eye somewhat "fuzzy". The term "fuzzy" is used here to describe the statistical aspect of noise that changes the actual amplitude of the signal. One can think of noise as reducing the size of the eye (Figure 25b).
When the effects of offset between the center of a symbol image and the center of a pixel are combined with the presence of noise and a threshold that is above or below the midpoint of the eye, errors will be made in bit detection (Figure 25b). To counter this effect, interpolation and pulse slimming are used.
Interpolation:
The alignment algorithm has the accuracy to position the center of a symbol image with at least the precision of ±1/4 pixel. Interpolation is invoked to account for the variation in energy distribution of a symbol image across the pixels (Figure 25c). This variation is due to the variable location of the symbol image relative to the exact center of the pixel. If a symbol is centered over a single pixel, the majority of the energy associated with that symbol will be found in that pixel. If the center of the symbol falls between pixels, the energy associated with that symbol will be distributed between multiple pixels (Figure 26).
To obtain a measure of the energy associated with a symbol image for all possible alignments of symbol centers, a weighted summation of a 3 x 3 array of pixels is used as a measurement of the symbol energy. The 9 pixels in the array are chosen such that the calculated true symbol center lies somewhere within the central pixel of the 3 x 3 array. This central pixel location is subdivided into 16 regions, and depending on which region the symbol is centered in, a predetermined weighting is used in summing up the 3 x 3 array. Figure 27 shows the location of the 16 regions on a pixel and their corresponding nine-element weighting patterns.
The four weights ("0", ".25", ".5", and "1") are chosen in this embodiment to minimize binary calculation complexity. (Each of these weights can be implemented by applying simple bit shifts to the pixel values.) In general, other weighting strategies could be used.
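As an illustration, the weighted 3 x 3 summation can be sketched as follows. This is a hypothetical sketch only: the single weight mask shown is not one of the actual Figure 27 patterns (the real design selects a pattern based on which of the 16 sub-pixel regions contains the calculated symbol center), but it uses only the four shift-friendly weights described above.

```python
import numpy as np

# Hypothetical example of one weighting pattern; the real design selects
# one of the Figure 27 patterns according to the sub-pixel region in which
# the symbol center falls.
WEIGHTS = np.array([[0.25, 0.5, 0.25],
                    [0.5,  1.0, 0.5],
                    [0.25, 0.5, 0.25]])

def symbol_energy(pixels, cx, cy, weights=WEIGHTS):
    """Weighted sum of the 3x3 neighborhood whose central pixel is (cx, cy)."""
    patch = pixels[cy - 1:cy + 2, cx - 1:cx + 2]
    return float((patch * weights).sum())
```

Because 0.25 and 0.5 correspond to right-shifts by two and one bit, the hardware needs no true multiplier for this step.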
Pulse Slimming:
The pulse slimming step estimates the influence of neighboring symbols and subtracts the signal contribution due to their overlap from the signal read from the current sensor pixel being processed. It is an important feature of the preferred embodiment to perform pulse slimming after interpolation, that is, after the data are corrected for pixel position with reference to the sensor grid. Pulse slimming reduces the effect of the overlap, thereby increasing the size of the "eye" (see Figure 25d).
One method of assessing the effect of neighboring symbols is to estimate their position and subtract a fraction of the pixel value at these estimated neighboring positions from the value at the current pixel under study. One implementation subtracts one eighth of the sum of the pixel values two pixels above, below, and on each side of each pixel in the zone being processed.
Mathematically this can be written:
Pixel(x, y) ← Pixel(x, y) − (1/8)·(Pixel(x, y − 2) + Pixel(x, y + 2) + Pixel(x − 2, y) + Pixel(x + 2, y))
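A direct sketch of this subtraction (illustrative only; guard handling at the edges of the zone is omitted):

```python
import numpy as np

def pulse_slim(pixels, x, y):
    # Subtract one eighth of the sum of the pixel values two pixels above,
    # below, and on each side of the current pixel, per the formula above.
    # Valid for interior pixels only; zone edges would need guard handling.
    neighbors = (pixels[y - 2, x] + pixels[y + 2, x] +
                 pixels[y, x - 2] + pixels[y, x + 2])
    return pixels[y, x] - neighbors / 8.0
```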
3.8 STEP 8: PERFORM RETRIEVAL THRESHOLD DECISION
Finally, following sequential execution of each of the above modules in the ABR process, a 1 or 0 decision for each potential symbol location is made by comparing the magnitude of the processed symbol value (after pulse slimming and interpolation) to a threshold. If the corrected pixel value is below the threshold (low light), a "zero" is detected. If the corrected value is above the threshold value (high light), a "one" is detected.
3.9. STEP 9: PERFORM ADDITIONAL ERROR DETECTION AND CORRECTION (EDAC)
In addition to the alignment and bit retrieval of the present invention, known error detection and correction processes may be employed. For a suitable ORAM error correction design see Chow, Christopher Matthew, An Optimized Singly Extended Reed Solomon Decoding Algorithm, Master of Science Thesis, Department of Electrical Engineering, University of Illinois, 1996.
4. APPARATUS FOR HARDWARE IMPLEMENTATION OF THE METHOD:
The method described above is the software implementation of the invention. However, the currently preferred embodiment implements the process in specific hardware (logic implemented in circuits) and firmware (microcode) to achieve speed goals and other advantages. This preferred implementation is depicted in Figure 28, "ORAM electronics receiver subsystem", and separates the hardware implementation into two functional integrated circuits (ICs).
Image Sensing and Digitizing (Sensor IC)
The sensor IC of Figure 28 combines sensor 27 and image digitizer 29 and converts photonic energy (light) into an electronic signal (an analog process). The sensor IC 27 includes an array 27a of sensing elements (pixels) arranged in a planar grid placed at the focal plane of the data image and senses light incident on each element or pixel. The accumulated pixel charges are sequentially shifted to the edge of the pixel array and preamplified. In the preferred embodiment, the analog voltage level at each pixel is digitized with three bits (eight levels) of resolution. This accumulated digital representation of the image is then passed to the ABR IC, which combines the functions of RAM 30 and the alignment/bit retrieval algorithm shown in Figure 1.
Data Alignment and Bit Retrieval (ABR IC)
The ABR IC of Figure 28 is a logical module or integrated circuit which is purely digital in nature. The function of this module is to mathematically correct the rotation, magnification, and offset errors in the data image in an algorithmic manner (taking advantage of embedded features in the data image called fiducials). Once the image has been aligned, data is extracted by examining the amplitude profiles at each projected symbol location. Random access memory (RAM) 30, which in this embodiment is in the form of a fast SRAM, holds the digitized data image from the sensor IC, and specific processing performs the numerical operations and processes described herein for image alignment and data bit retrieval.
4.1. IMAGE SENSING AND DIGITIZING IC (THE SENSOR IC)
4.1.1. PHOTON DETECTION
The Sensor IC is made up of silicon light sensing elements. Photons incident on silicon strike a crystal lattice creating electron-hole pairs. These positive and negative charges separate from one another and collect at the termini of the field region, producing a detectable packet of accumulated charge. The charge level profile produced is a representation of light intensity profiles (the data image) on the two-dimensional sensor plane.
The sensor plane is a grid of distinct (and regular) sensing cells called pixels which integrate the generated charge into spatially organized samples. Figure 29 shows, graphically, how the light intensity of the image (shown as three-dimensional profiles) affects the pixel signal magnitude. Pixel signal magnitude is a single valued number representative of the integrated image intensity (energy) profile over the pixel. These relative values are shown as the numbers within each pixel in Figure 29.
The intensity representations of Figure 29 assume a certain registration between the location of the "1s" (high intensity spots) and the pixel grid array. Take, for example, the solitary "1" in the left hand diagram of Figure 29. If the "1" bit were not centered over a single pixel, but instead centered over the intersection of four neighboring pixels, a different symmetry would appear. There would be four equally illuminated pixels (forming a 2 x 2 square) surrounded by a ring of lesser illuminated pixels. This example assumes that the image of a single data symbol covers approximately four (2 x 2) pixels. The nominal system magnification is 20 to 1 (+/-10%), resulting in a 1μ diameter symbol on the media being projected onto a 2 x 2 array of 10μ pixels on the sensor. Magnification errors, however, can change the relative pixel values slightly. As magnification exceeds 20 to 1, each symbol will be spread across more than 2 x 2 pixels, and for image magnifications less than 20 to 1, symbol energy will be shared by less than 2 x 2 pixels. Note that this approximation ignores the higher order effects of the fringes of the symbol image (resulting from the point spread function of the optics).
Magnification and registration tolerances and guardband define the required sensor array dimensions. The sensor 27 (Figure 28) must be large enough to contain the complete image in the event of maximum magnification (specified in this example to be 22 to 1) and worst case registration error (specified to be less than +/-100μ in both the x and y direction). Since the data patch on the media is 354 x 354 1μ spaced symbols, the patch image on the sensor can be as large as 7788μ. Adding double the maximum allowable offset (200μ) to allow for either positive or negative offset requires the sensing array to be at least 7988μ wide, or 799 10μ pixels.
Therefore, in the described embodiment, the Sensor IC design specifies an 800 x 800 pixel array.
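The sizing arithmetic above can be reproduced directly:

```python
import math

SYMBOLS_PER_SIDE = 354     # data patch is 354 x 354 symbols
SYMBOL_PITCH_UM = 1.0      # 1 micron symbol spacing on the media
MAX_MAGNIFICATION = 22.0   # worst case magnification (20 to 1, +10%)
MAX_OFFSET_UM = 100.0      # worst case registration error, each direction
PIXEL_UM = 10.0            # sensor pixel pitch

# Largest possible patch image on the sensor
image_um = SYMBOLS_PER_SIDE * SYMBOL_PITCH_UM * MAX_MAGNIFICATION  # 7788
# Allow for positive or negative registration offset
required_um = image_um + 2 * MAX_OFFSET_UM                         # 7988
required_pixels = math.ceil(required_um / PIXEL_UM)                # 799
```

Rounding 799 up to the next convenient size gives the 800 x 800 array of the described embodiment.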
4.1.2. PREAMPLIFICATION DESIGN CONSIDERATIONS
By executing repetitive device cycles, signal charge is sequentially transported to the edge of the active sensor, where a preamplifier 80 converts signal charge to a voltage sufficient to operate typical processing circuitry, here provided by digitizer and logic 29 followed by output buffers 82. The sensor IC architecture (Figure 30) specifies a preamplifier 80 for each row of pixels. Since entire columns of data are read out with each charge coupled device (CCD) cycle (one pixel per row across all 800 rows), the CCD operating frequency is a key parameter determining system performance. In the simplest implementation, a standard full frame imager is used. The CCD clock operates at 10 MHz. Designing output circuitry for every pixel row multiplies the per cycle throughput of a standard full frame imager by the number of rows. In the preferred embodiment, this has the effect of increasing system performance by a factor of 800. System noise is predominately a function of preamplifier design; therefore, careful attention is paid to the design and construction of the preamplifier. Important preamplifier parameters are gain, bandwidth and input capacitance. Gain must produce sufficient output signal relative to noise; however, gain-bandwidth tradeoffs are inevitable, and gain must be moderated to achieve sufficient speed. Input capacitance must be kept low to maximize charge-to-voltage conversion and minimize input referred noise charge. The sensor preamplifier 80 is a common source FET input configuration. Associated resetting circuitry of standard design may be used and should be simple, small, and low noise.
Suitable preamplifier designs are known and selected to meet the following specifications.
Preamp Performance:
A = 100 μVolts/electron
BW(3dB) = 55MHz
Input referred noise = 50 electrons
4.1.3. DIGITIZATION - AUTOMATIC GAIN CONTROL
Prior to digitizing the image, a sampling of pixel amplitude is used to establish thresholding of the A to D converter. If the threshold selected is too high, all image symbol values fall into the first few counts of the A to D and resolution is lost. If the threshold selected is too low, the A to D saturates, distorting the output. Image intensity is a function of location across the zone, patch, and chapter; therefore, any thresholding algorithm must accommodate regional variation.
The automatic gain control (AGC) scheme maximizes system performance by maximizing the dynamic range of image digitization, enhancing system accuracy and speed. The image amplitude (intensity) is monitored at predetermined points (AGC skirts) and this information is used to control the threshold levels of the A to D converters. As image readout begins, the signal is primarily background noise because, by design, the image is aimed at the center of the sensor 27 and readout begins at the edge, which should be dark. As the CCD cycles proceed and successive columns are shifted toward the sensing edge, the first signal encountered is from the image of the leading edge of the AGC skirt (see Figure 31). The AGC skirt image is a 5 x 9 array of all "ones" and therefore transmits maximal light. The amplitude read from pixels imaging these features represents the maximum intensity expected anywhere on the full surface. At each pixel row a logic block in digitizer and logic 29 (see Figure 30) is designed to detect these peak value locations and, under simple control, select the pixel row most closely aligned to the AGC features.
Along the same pixel rows as the AGC skirt, in the fiducial rows, are precoded portions of the image which represent local "darkness", i.e., a minimum value (all "0"), and local "brightness", i.e., a maximum value (all bits are "1"). These row values are monitored by peak detection circuitry as the pixel columns are read out. Peak detectors (see Figure 33, discussed below) are known per se, and the decision-based peak detector used here stores the highest value encountered. Its counterpart, the minimum detector, is identical in structure but with the comparator sense reversed.
The difference between the maximum and minimum signals represents the total A to D range, and accordingly sets the weight for each count. The value of the minimum signal represents the DC offset (or background light) present in the image. This offset is added to the A to D threshold. These threshold values are shared across the image (vertically with respect to Figure 31) to achieve linear interpolation in value between AGC samples.
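A simplified behavioral sketch of how the comparator thresholds could be derived from the AGC minimum and maximum samples (the actual part realizes this with a current source driving a resistor ladder; evenly spaced thresholds are an assumption here):

```python
def agc_thresholds(min_val, max_val, bits=3):
    # The minimum sample sets the DC offset and (max - min) sets the full
    # A to D range; the 2**bits - 1 comparator thresholds are spaced
    # evenly across that range.
    levels = 2 ** bits
    span = max_val - min_val
    return [min_val + span * k / levels for k in range(1, levels)]
```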
4.1.4. DIGITIZATION - QUANTIZATION
For processing, the captured image is digitized and passed to the alignment/bit retrieval (ABR) algorithms. The sensor IC 27, 29, including CCDs, performs the digitization following preamplification. The ORAM embodiment described herein utilizes three bits (eight levels) of quantization, indicated in Figure 32.
With reference to Figure 33, each preamplifier 80 output feeds directly into an A to D block, so there is an A to D per pixel row. The design here uses seven comparators with switched capacitor offset correction. Thresholds for these comparators are fed from a current source which forces an array of voltages across a series of resistors. The values of the thresholds are controlled by a network of resistors common to all pixel rows, and preset with the a priori knowledge of AGC pixel row image maximum and minimum amplitudes. Figure 32 shows typical A to D codes applied to an arbitrary signal.
The result of this step is a three bit (eight level) representation of pixel voltage. This value represents the intensity of incident light, relative to local conditions. The net effect of this relative thresholding is to flatten out any slowly varying image intensity envelope across the patch. The digitized image, now normalized, is ready for output to the ABR function.
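With the thresholds set, the seven-comparator flash conversion amounts to counting how many thresholds the sample exceeds, as this minimal sketch shows:

```python
def quantize(sample, thresholds):
    # Flash A to D: the 3-bit output code is simply the number of
    # comparator thresholds the sampled voltage exceeds (0-7 for the
    # seven thresholds described above).
    return sum(sample > t for t in thresholds)
```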
4.1.5. DATA OUTPUT
At the end of each pixel clock cycle, the A to Ds produce a three bit value for each pixel row. There are 800 pixel rows on the sensor detector plane and the sensor pixel clock operates at 20 MHz. At 20 MHz, the sensor outputs 2400 bits (800 rows of three-bit values) every 50 nS. A 200 bit wide bus running at 240 MHz couples the sensor IC to the ABR IC of Figure 28.
The organization of this bus structure maximizes speed while minimizing silicon surface area and power dissipation of the chip. Each output buffer is assigned to four pixel rows, with each pixel row producing three bits per pixel clock cycle. At each pixel clock cycle, the output buffer streams out the twelve bits generated in time to be ready for the next local vector. While this scheme is realizable with current technology, advances in multilevel logic could result in a significant reduction in the bandwidth required.
4.1.6. SENSOR IC CONTROL
To manage the required functions, the Sensor includes a central control logic block whose function is to generate clocking for image charge transfer; provide reset signals to the preamplifiers, A to D converters and peak detectors; actuate the AGC row selection; and enable the data output stream. Figure 33 depicts the conceptual signal flow on the Sensor IC.
The control block is driven with a 240 MHz master clock, the fastest in the system. This clock is divided to generate the three phases required to accomplish image charge transfer in the CCD. The reset and control pulses, which cyclically coordinate operation of the preamplifier with charge transfer operations and the A to D, are derived from the charge transfer phases and are synchronized with the master clock. The output buffer control operates at the full master clock rate (to meet throughput requirements), and is sequenced to output the twelve local bits prior to the next pixel clock cycle. Figure 33 shows the major timing elements of the sensor control. The three CCD phases work together to increment charge packets across the imaging array a column at a time. When the third phase goes low, charge is input to the preamplifier. The preamplifier reset is de-asserted just prior to the third phase going low so it can process the incoming charge. Also just prior to the third phase going low, and concurrent with the pre-amp reset, the A to D converters are reset, zeroed and set to sensing mode.
4.2. DATA ALIGNMENT AND BIT RETRIEVAL (ABR) IC
The principal elements of the ORAM data correction electronics are illustrated in Figure 34, which shows an alignment and bit retrieval IC 32 receiving raw data from the sensor IC 27, 29. The IC 32 electronics include FAST SRAM, alignment circuitry, bit retrieval circuitry, and EDAC circuitry.
4.2.1. ABR IC FUNCTIONAL DESCRIPTION
4.2.1.1. FUNCTIONAL FLOW
The alignment and bit retrieval (ABR) process steps are shown in the flow chart of Figure 5. Image information is captured and quantized on the sensor IC (steps 1-2). This data is then streamed via high speed data bus to the ABR IC to fill an on board data buffer (step 2). A routine, "coarse corner location," proceeds which orients memory pointers to approximately locate the image (step 3). With coarse corner location complete, the more exact "true corner location" is performed (step 4). Steps 5, 6, 7 and 8 are mathematically intensive operations to determine the precise zone offset, rotation and magnification parameters used in bit decoding. Step 5 is a series of convolutions performed on the zone fiducial image to yield the zone's "in-phase" and "quadrature" terms in the "x" direction (hence the designations I and Q). Step 6, least squares fit (LSF), combines the I and Q values to form a line whose slope and intercept yield the "x" axis offset and symbol separation distance. Similar steps yield the "y" axis information. Use of the resultant "x" and "y" information predicts the exact locations of every symbol in the zone. The next two operations are signal enhancement processing steps to improve the system signal-to-noise ratio (SNR). In step 7, pulse slimming reduces the potential for intersymbol interference (ISI) caused by neighboring symbols, and interpolation accommodates for the possibility of several adjacent pixels sharing symbol information.
With the image processed through steps 1 through 7 above, bit decisions can be made by simply evaluating the MSB (most significant bit) of symbol amplitude representation (step 8). This is the binary decision process step converting image information (with amplitude profiles and spatial aberrations) into discrete digital bits. Once data is in bits, the error detection and correction (EDAC) function (step 9) removes any residual errors resulting from media defects, contamination, noise or processing errors.
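Since the corrected symbol values are three-bit quantities, the threshold comparison reduces to testing the most significant bit, as this sketch shows:

```python
def bit_decision(value, bits=3):
    # A corrected value at or above half of full scale has its MSB set,
    # detecting a "one"; a value below half scale detects a "zero".
    return (value >> (bits - 1)) & 1
```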
4.2.1.2. BLOCK LEVEL DESCRIPTION
Figure 34 shows in more detail a block diagram of the ABR IC 32. The diagram portrays a powerful, special purpose compute engine. The architecture of this device is specifically designed to store two-dimensional data and execute the specific ORAM algorithms to rapidly convert raw sensor signals to end user data. This embodiment of ABR IC 32 includes an SRAM 91, micro controller and stored program 92, adder 94, accumulator 95, comparator 96, temporary storage 97, TLU 98, hardware multiplier 99, and SIT processor 100. Additionally, an output RAM buffer 102 and EDAC 103 are provided in this preferred embodiment.
Sensor data is read into fast RAM 91 in a process administered by autonomous address generation and control circuitry. The image corners are coarsely located by the micro controller (μC) 92 and the approximate corner symbol pixel location for the zone of interest is found. Exact location of the reference pixel is found by successively running a correlation kernel described above; a specialized 8 word adder 94 with fast accumulator 95 and a comparator 96 speed these computations. Detailed zone image attributes are determined by processing the image fiducial. This involves many convolutions with two different kernels. These are again facilitated by the 8 word adder and fast accumulator. Results of these operations are combined by multiplication, expedited by hardware resources. Divisions are performed by the micro controller (μC) 92. The arc tangent function can be accomplished by table look up (TLU) 98.
At this stage, the zone's image offset and rotation are known precisely. This knowledge is used to derive addresses (offset from the corner symbol origin) which describe the symbol locations in the RAM memory space. These offsets are input to the slimming-interpolator (SIT) 100, which makes a one or a zero bit decision and delivers the results to an output RAM buffer 102, where the EDAC 103 function is performed.
4.2.1.3. RAM AND SENSOR INTERFACE
Image data is sequentially read from the Sensor IC to a RAM buffer on the ABR IC. This buffer stores the data while it is being processed. The buffer is large enough to hold an entire image, quantized to three bits. A Sensor size of 800 x 800 pixels, quantized to three bits per pixel, requires 1.92 million bits of storage.
Assuming a 20 MHz Sensor line clock, loading the entire Sensor image to RAM takes 40 μSec. To support throughput and access time requirements, it is necessary to begin processing the image data prior to the image being fully loaded. The RAM buffer, therefore, has dual port characteristics. To achieve dual port operation without increased RAM cell size, the buffer is segmented as shown in Figure 35.
As the image data columns are sequenced off the Sensor, they are stored in memory, organized into stripes or segments 1 through n as illustrated in Figure 35. The width of these stripes (and therefore the number of them) is optimized depending on the technology selected for ABR IC implementation. For the current embodiment, the estimated stripe width is 40 cells; therefore 20 stripes are required (the product of these two numbers being 800, equal to the pixel width of the Sensor image area). This choice leads to a 2 μSec latency between image data readout and the commencement of processing.
4.2.1.4. PARALLEL ADDER, ACCUMULATOR AND COMPARATOR
Many of the alignment operations are matrix convolutions with a pre-specified kernel. These operations involve summing groups of pixel amplitudes with coefficients of ±1. To expedite these operations, the design includes a dedicated hardware adder whose function is to sum 8 three-bit words in a single step. For example, an 8 x 8 convolutional mask becomes an 8 step process compared to a 64 step process if the operation were completely serial. The input to the adder is the memory output bus, and its output is a 6 bit word (wide enough to accommodate the instance where all eight words equal 7, giving the result of 56). The six bit word can represent 64 (2^6) values, which more than accommodates the worst case. Convolutions in the current algorithm are two dimensional and the parallel adder is one dimensional. To achieve two dimensionality, successive outputs of the adder must themselves be summed. This is done in the accumulator. At the beginning of a convolution, the accumulator is cleared. As proper memory locations are accessed under control of the μController, the result of the adder is summed into the accumulator holding register. This summation can be either an addition or subtraction, depending on the convolution kernel coefficient values.
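The adder/accumulator structure can be modeled behaviorally as follows (a sketch, not the circuit; for simplicity it assumes a kernel whose ±1 coefficients are constant within each row):

```python
def parallel_add8(words):
    # Dedicated hardware adder: sums eight 3-bit words (each 0-7) in a
    # single step; the worst case 8 * 7 = 56 fits in the 6-bit result.
    assert len(words) == 8 and all(0 <= w <= 7 for w in words)
    return sum(words)

def convolve_2d(rows, row_signs):
    # Two-dimensional convolution built from the one-dimensional adder:
    # each image row is one parallel-add step, summed into the accumulator
    # with the kernel's add/subtract sense for that row.
    acc = 0  # accumulator is cleared at the start of a convolution
    for row, sign in zip(rows, row_signs):
        acc += sign * parallel_add8(row)
    return acc
```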
The comparator function is employed where digital peak detection is required (e.g., when the corner symbol reference pixel is being resolved). In this operation, a convolution kernel matching the zone corner symbol pattern is swept (two dimensionally) across a region guaranteed large enough to include the corner pixel location. The size of this region is dictated by the accuracy of the coarse alignment algorithm. Each kernel iteration result (Figure 36) tests whether the current result is greater than the stored result. If the new result is less than the stored value, it is discarded and the kernel is applied to the next location. If the new result is greater than the stored result, it replaces the stored result, along with its corresponding address. In this fashion, the largest convolution, and therefore the best match (and its associated address), is accumulated. This address is the (x, y) location of the zone's corner reference pixel.
4.2.1.5. HARDWARE MULTIPLY
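A behavioral sketch of this sweep-and-compare peak detection (the kernel and search region below are illustrative placeholders, not the actual corner symbol pattern):

```python
def locate_corner(image, kernel, region):
    # Sweep the corner kernel over every candidate (x, y) in the
    # uncertainty region; keep the largest convolution result and its
    # address, mirroring the compare-and-store loop described above.
    best_val, best_xy = None, None
    for x, y in region:
        s = sum(kernel[j][i] * image[y + j][x + i]
                for j in range(len(kernel))
                for i in range(len(kernel[0])))
        if best_val is None or s > best_val:
            best_val, best_xy = s, (x, y)
    return best_xy, best_val
```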
The alignment algorithms utilize a least squares fit to a series of points to determine magnification and rotation. The least squares operation involves many multiplies. To reduce their impact on access time, a dedicated multiplier is required. Many multiplier structures are available (e.g., pipelined, bit serial, μControlled, Wallace Tree, etc.). This implementation uses a Wallace Tree structure. The fundamental requirement is that the multiplier produce a 12 bit result from two 8 bit inputs within one cycle time.
4.2.1.6. ARC TANGENT FUNCTION
Resolving the angle represented by the quotients of the Alignment Parameters, i.e., (x0) and (y0), transforms the results of the least squares fit operation into physically meaningful numbers (such as magnitude and rotation in terms of memory addresses). Quotients are used as input to this function since they are inherently dimensionless; that is, amplitude variation has been normalized out of them.
A Table Look Up (TLU) operation is used to perform this step, saving (iterative) computational time as well as IC surface area required for circuits dedicated to a computed solution. A table size of 256 ten-bit numbers (2560 bits) supports resolution of angles up to 0.35°. The table's 256 points need only describe a single quadrant (the signs of the quotient operands determine which quadrant).
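A floating-point sketch of the single-quadrant table look-up (the actual table stores ten-bit fixed-point entries, and the linear indexing scheme here is an assumption):

```python
import math

TABLE_SIZE = 256
# Single-quadrant table: entry k approximates atan(k / (TABLE_SIZE - 1)).
ATAN_TABLE = [math.atan(k / (TABLE_SIZE - 1)) for k in range(TABLE_SIZE)]

def tlu_atan(q):
    # Look up the angle for a quotient q in [0, 1]; the signs of the
    # original operands (not modeled here) would select the quadrant.
    idx = min(int(round(q * (TABLE_SIZE - 1))), TABLE_SIZE - 1)
    return ATAN_TABLE[idx]
```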
4.2.1.7. SIT PROCESSOR AND BIT DECISION
In a linear fit example, four Alignment Parameters, x0, dx, y0 and dy, describe the results of coarse and true corner location, alignment calculations and trigonometric operations. These parameters represent the x and y offset, from the corner symbol origin, of the first data symbol, with a resolution of 1/4 pixel. The parameters dx and dy represent the distance between symbols, in units of memory locations. It is important to note that these quantities have more precision than obtained by simply specifying an address. These parameters are able to locate a symbol anywhere in a zone to within ± 1/4 pixel. Stated another way, these numbers are accurate to within 1 part in 608 (69 symbols in a zone at a magnification of 2.2 pixels per symbol implies that the zone spans 152 pixels; to be accurate within 1/4 pixel implies being accurate to within 1 part in 152*4, or 608). Therefore, alignment parameters must be at least 9 bit numbers since this is the smallest 2^n value capable of providing accuracy greater than 1 part in 608. To account for quantization noise and to prevent deleterious effects from finite precision mathematics, the current baseline for these parameters is 12 bits of precision.
The interpolation and slimming (SIT) processor is a digital filter through which raw image memory data is passed. The SIT circuit is presented with data one row at a time, and operates on five rows at a time (the current row and the two rows above and below it). The circuit tracks the distance (both x and y) from the zone origin (as defined by the corner reference pixel). Knowledge of the distance in "pixel space" coupled with derived alignment parameters yields accurate symbol locations within this set of coordinates.
Figure 37 shows a portion of a zone image mapped into memory. Once the alignment routines establish the exact zone origin, the data location is known. Moving away from the origin, three symbol positions down and three symbol positions left (correspondingly, approximately six pixels down and six pixels left, depending on the exact magnification), the memory area of the zone containing data is reached. Once in this area, rows of image data are passed to the SIT circuit in order (from top to bottom), to operate on one at a time, with knowledge of the neighborhood.
The interpolation and pulse slimming are signal processing steps to improve signal-to-noise ratio (SNR). Figure 38 summarizes the operations for both techniques. For more detail on pulse slimming refer to Section 3.7.
Pulse Slimming estimates the portion of the total energy on a central symbol caused by light "spilling" over from adjacent symbols due to intersymbol interference. The process subtracts this estimated value from the total energy, reducing the effect of ISI. The algorithms in the current embodiment subtract, from every symbol value, a fraction of the total energy from adjacent symbols. Interpolation is used to define the pixel position closest to the true center of the symbol image. Because the Sensor array spatially oversamples the symbol image (4 pixels per average symbol), energy from any single symbol is shared by several pixels. The most accurate measure of the actual symbol energy is obtained by determining the percentage of the symbol image imaged onto each of the pixels in its neighborhood, and summing this energy. For a more comprehensive overview of the interpolation and pulse slimming algorithms, see Section 3.7.
The input to the interpolation and slimming processor (SIT) is a cascaded series of image data rows, and their neighbors. By looking at the data in each row, with knowledge of calculated symbol location, decisions and calculations about the actual energy in each symbol are made. A final residual value establishes the basis for a 1 or 0 decision. In communications theory, the "Eye Diagram" for a system describes the probability of drawing the correct conclusions about the presence or absence of data. Due to the equalization effected by the AGC function, the maximum amplitude envelope should be fairly flat across the image. The most likely source of ripple will be from the MTF of the symbol shape across the pixels. The output of the SIT block is simple bits. For (approximately) every two rows of image pixel data, 64 bits will be extracted. In the recorded media, each zone contains 4096 data bits (64 x 64), represented by approximately 19000 (138 x 138) pixels on the sensor, depending on exact magnification. Each zone is approximately 138 x 138 pixels with 3 amplitude bits each, or about 57K bits, while it is being stored as image data. On readout, these simple bits are passed along to the output buffer RAM where they are, in effect, re-compressed. This image ultimately yields 4096 bits of binary data, a compression of about 14 to 1.
4.2.1.8. OUTPUT RAM BUFFER
The output buffer (Figure 39) stores the results of the SIT processor. It is a small RAM, 8192 bits, twice the size of a zone's worth of data. As bits are extracted from the zone, they are placed in the first half of this buffer. Once the zone decode is complete (and the first half of the buffer is full of new data from the zone), the EDAC engine begins to operate on it.
4.2.1.9. EDAC ENGINE
Error Detection and Correction (EDAC) is performed by a conventional Reed- Solomon decoder well known in the state of the art.
4.2.1.10. μCONTROLLER
Executive control of the ABR process is managed by the μController (Figure 34).
This block of circuitry starts and stops the operations which perform zone location (coarse and fine), as well as the alignment, symbol image processing and correction. With the exception of the divide operation (part of the least squares fit operation, performed during image parameter extraction), the μController does not perform difficult arithmetic operations such as SIT, for which separate dedicated modules are available.
4.2.2. CRITICAL ABR IC PERFORMANCE REQUIREMENTS
4.2.2.1. DATA ACCESS TIME BUDGET
What follows is a breakdown of the ORAM data access time specification and it forms the basis for requirements placed upon the ABR IC components. The steps in the data access process are listed, followed by some global assumptions as well as analysis or rationale for the timing associated with each step.
1. Integration (Image acquisition)
2. Readout to RAM (Concurrent with AGC)
3. Coarse image location
4. True Corner (reference pixel) location
5. Y-axis Phase and Quadrature sums, arc tangent operation and "unwrap" to straight line of points
6. LSF yielding Yo and dY
7. X-axis Phase and Quadrature sums, arc tangent operation, and "unwrap" to straight line of points
8. LSF yielding Xo and dX
9. Interpolation
10. Pulse slimming
11. Thresholding
12. Error Correction
Global Assumptions:
1. The Sensor IC delivers one complete row of pixel data (quantized to three bits) every 50 nS, or at a rate of 20 MHz.
2. AGC is performed real time with peak detection circuitry, as the image is being read out to RAM, and thus does not add to the total data access time.
3. All memory accesses and simple mathematical operations occur at a 100 MHz (10 nS) clock rate.
4. A hardware Multiply resource is available, with a propagation time of 10 nS.
5. The physical data image extents = 354 symbols x 354 symbols. (Nominally, then, with 2 x 2 pixels per symbol,) the pixel extents = 708 x 708 pixels.
6. Image magnification: Spec = 20 ± 2.
7. Physical Image offset (uncertainty) is ± 15 pixels in all orthogonal directions.
Access Time Components:

Process Step — Contribution (μsec) — Analysis

Integration — 20μS — A typical spec for current sensor devices.
Readout — 9.3μS — Image magnification tolerances dictate a sensor plane with 800 x 800 pixels. Therefore, the average image falls ~50 pixels from the readout edge. The nominal zone image is 138 x 138 pixels; therefore acquisition of the first full zone requires (50+138)/20E6 = 9.4 μsec. However, only the first 12 rows containing fiducial data must be read before zone alignment processing can begin, therefore only (50+12)/20E6 = 3.1 μsec is required before further processing can proceed.
Coarse Corner Location — 0.2μS — Because the AGC features and a "signal valid" indicator identify the image edge, coarse horizontal location of the image (in the direction of readout) is determined in real time, with no impact on access time. In the perpendicular direction, the edge will be coarsely found by sequentially accessing inward across memory using the parallelism of the memory. Covering the uncertainty of 72 pixels, with the (assumed) 8 pixels available simultaneously, requires 9 access operations. Sampling twice to increase the certainty of measurement requires 18*10nS, which is rounded up to 0.2μS.
True Corner Location — 2.9μS — Coarse alignment locates the image to within a region of 6 x 6 pixels. Assuming that a hardware adder is available to sum 8 three-bit values simultaneously, each pass through the corner kernel can be done in 4 memory operations. Because there is an "accumulate and compare" associated with these accesses, this number is doubled to 8 (per kernel pass). There are 36 locations to evaluate with the kernel, so it takes (4*2*36*10nS) = 2.9μS.

Y Component of Alignment Parameters — 5.7μS — The I and Q sums each require 0.8μS (1.6μS total), assuming a hardware adder. This comes from 10 points x 8 accesses per point x 10nS per access. Each kernel sum is a 9-bit number (because 80 three-bit numbers are summed together); dividing these requires (30 operations x 10 quotients x 10nS) = 3μS. Table look-up of 10 numbers to determine their implied angle requires 0.1μS. The LSQF is estimated at 100 operations (1μS), assuming the existence of a high speed HW multiplier. The sum of these component contributions yields 5.7μS.

X Component of Alignment Parameters — 6.7μS — Similar to the Y component (above), with 1μS added to convert the S3 and S4 results to pixel (RAM) space numbers.
Items 3, 4, 5, 6, 7 and 8 are summed together to form the alignment result of 15.5μS, shown as the "align" contribution to overall timing in the diagram of Figure 40.
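As an illustrative cross-check (a hypothetical sketch, not part of the disclosed hardware), the per-step operation counts in the analysis above can be recomputed under the stated 10nS cycle assumption:

```python
# Hypothetical recomputation of the per-step alignment contributions,
# assuming the 10nS (100MHz) cycle and the operation counts stated above.
CLOCK_NS = 10  # one memory access or simple arithmetic operation

coarse_us = 0.2                            # 9 accesses x 2 samples x 10nS = 0.18, rounded up
true_us   = 4 * 2 * 36 * CLOCK_NS / 1000   # 36 kernel positions, 4 ops doubled: 2.88
iq_us     = 2 * 10 * 8 * CLOCK_NS / 1000   # I and Q sums: 1.6
div_us    = 30 * 10 * CLOCK_NS / 1000      # 10 quotients at 30 operations each: 3.0
lut_us    = 0.1                            # angle table look-up for 10 numbers
lsf_us    = 100 * CLOCK_NS / 1000          # least-squares fit, ~100 operations: 1.0

y_us = iq_us + div_us + lut_us + lsf_us    # Y alignment parameters
x_us = y_us + 1.0                          # X adds a 1uS pixel-space conversion

align_us = coarse_us + round(true_us, 1) + y_us + x_us
print(round(align_us, 1))                  # total "align" contribution in uS
```

Summing the coarse-corner, true-corner, Y, and X contributions reproduces the alignment budget quoted for Figure 40.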
4.2.2.2. RAM AND DATA INPUT SPEEDS
The RAM storing the Sensor image data must be fast enough to handle the cycle times imposed by this readout and processing rate. Analysis indicates this rate is 200 parallel bits every 4.2nS. The segmented RAM design facilitates this by keeping row lengths short.
4.2.2.3. LOGIC PROPAGATION SPEEDS
Critical paths include CMOS logic, which propagates at about 200pS (200E-12 seconds) per gate delay, and flip-flop toggle rates that exceed 500MHz. By using sufficient parallelism in the logic design, the timing constraints discussed below are easily met.

4.2.2.4. REQUIRED μCONTROLLER CYCLE TIMES
The ORAM μController cycles at greater than 100MHz. Hardware acceleration of additions, multiplies, and comparisons must operate at this cycle time. In addition, any local storage, as well as the RAM, is selected to support this timing.
5. APPENDIX GLOSSARY OF TERMS
GLOSSARY OF KEY ALIGNMENT AND BIT RETRIEVAL TERMS:
AGC
Automatic gain control (AGC) is the process of modifying the gain of the amplifiers that set the threshold values for the analog to digital converters (ADCs). The term "automatic" indicates that the gain adjustment of the threshold setting amplifier "automatically" tracks variations in the image intensity. As the image intensity increases, amplifier gain increases accordingly, raising the threshold. As the image intensity decreases, the thresholding amplifier gain decreases. The effect of the AGC is to provide a signal to the analyzing electronics which is approximately equivalent to a signal derived from an image with an intensity profile that is constant over the entire CCD (charge coupled device) array. The better the resulting signal approximates one from a constant intensity profile, the better the AGC.
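A software model of this behaviour (purely illustrative, not the disclosed circuit; the decay factor `alpha` and the half-peak threshold rule are assumptions) might look like:

```python
# Illustrative AGC-style thresholding: the threshold tracks a decaying peak
# of the local image intensity, so that bright and dim regions are sliced
# as if the illumination profile were constant across the array.
def agc_threshold(row, alpha=0.5):
    """Return per-pixel thresholds that follow a running intensity peak."""
    thresholds = []
    peak = row[0]
    for v in row:
        # Decaying peak detector: hold new peaks, otherwise relax toward v.
        peak = max(v, alpha * peak + (1 - alpha) * v)
        thresholds.append(peak / 2)  # slice at half the tracked peak
    return thresholds

row = [6, 7, 6, 2, 1, 3, 3, 1]  # a bright region fading into a dim one
print(agc_threshold(row))
```

In the dim region the threshold relaxes downward, so low-amplitude data spots still clear it, which is the effect the glossary entry describes.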
Coarse Zone Location
The information required for coarse zone location is the coordinate values for the upper left hand corner of each zone. Coarse alignment is the process of obtaining these coordinates. This alignment is termed "coarse" because the coordinate values are determined with an accuracy of ± 4 pixels.
True Zone Location
The "true" zone location information is the coordinate pair defining the pixel location closest to the center of the symbol (or collection of symbols) comprising the zone's corner reference. The corner reference of a zone is the point from which all other symbols in a zone are referenced by the bit retrieval algorithm. To find the true zone location, a corner symbol locating algorithm is used. The current embodiment performs a local convolution in a small area surrounding the coarse zone location. The convolution uses a convolving kernel that approximates a matched filter to the corner reference pattern. The area of convolution is equal to the area of the kernel plus nine pixels in both the row and column directions and is centered on the coordinates found in the coarse corner location process.
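A minimal sketch of this corner-locating convolution (assumed kernel and search-window sizes, not the embodiment's actual matched filter) is:

```python
# Sketch: locate a corner reference by sliding a small matched-filter kernel
# over the search area around the coarse location, keeping the best score.
def locate_corner(image, kernel, r0, c0, search=6):
    kh, kw = len(kernel), len(kernel[0])
    best, best_pos = None, None
    for r in range(r0, r0 + search):
        for c in range(c0, c0 + search):
            # Correlation score of the kernel at (r, c).
            score = sum(image[r + i][c + j] * kernel[i][j]
                        for i in range(kh) for j in range(kw))
            if best is None or score > best:
                best, best_pos = score, (r, c)
    return best_pos

# Tiny example: a 2x2 bright corner mark embedded in a dark field.
img = [[0] * 10 for _ in range(10)]
for i in (4, 5):
    for j in (4, 5):
        img[i][j] = 7
kernel = [[1, 1], [1, 1]]
print(locate_corner(img, kernel, 2, 2))  # (4, 4)
```

The score peaks where the kernel overlaps the corner pattern, which is exactly the matched-filter behaviour the glossary entry describes.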
Alignment and Alignment Parameters
Alignment is the process of determining the positions of the image symbols relative to the fixed pixel positions on the CCD array. In theory, any set of functions (x^a, cos(x), |x + 1|, etc.) might be used to describe this relationship, as long as the function provides an accurate approximation of the symbol positions. In the current embodiment of the alignment and retrieval algorithms, the relationship between the symbol positions and the pixel positions is described using polynomials. A first order polynomial accurately locates the symbols provided there is a constant magnification over a zone. A second order polynomial can locate the symbols provided there is a linear change in the magnification over a zone (1st order distortion). Higher order polynomials can be used to account for higher order distortions over the zone. By representing the relationship between symbols and pixels with a polynomial, the alignment process becomes the process of determining the alignment parameter values.
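For the first-order case, determining the alignment parameters reduces to a least-squares line fit of measured fiducial positions against symbol indices. A sketch follows (the names `x0`/`dx` mirror the Xo/dX parameters above; the sample spacing and offset are invented for the example):

```python
# Sketch: fit pixel position x as x0 + dx * n (symbol index n) by
# closed-form least squares, giving the first-order alignment parameters.
def fit_line(ns, xs):
    """Least-squares fit of xs ~ x0 + dx * ns."""
    m = len(ns)
    sn, sx = sum(ns), sum(xs)
    snn = sum(n * n for n in ns)
    snx = sum(n * x for n, x in zip(ns, xs))
    dx = (m * snx - sn * sx) / (m * snn - sn * sn)
    x0 = (sx - dx * sn) / m
    return x0, dx

# Fiducial symbols ~2.02 pixels apart, offset 50.5 pixels from the origin.
ns = list(range(10))
xs = [50.5 + 2.02 * n for n in ns]
x0, dx = fit_line(ns, xs)
print(round(x0, 3), round(dx, 3))  # 50.5 2.02
```

With noise-free input the fit recovers the offset and pitch exactly; with measurement noise it returns the best-fit values, which is the point of using the LSF step.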
Alignment Algorithm

The alignment algorithm determines each zone's alignment parameters by processing embedded alignment patterns (fiducials) bordering that zone. The fiducials are uniformly spaced arrays of symbols and are interpreted as a two-dimensional periodic signal.
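The phase-and-quadrature treatment of such a periodic signal can be sketched as follows (illustrative only; the period, sample count, and signal model are assumptions, not the embodiment's values):

```python
# Sketch: estimate the spatial phase of a periodic fiducial signal by
# correlating it with cosine (I) and sine (Q) references, then taking the
# arc tangent of the quadrature/in-phase sums.
import math

def fiducial_phase(samples, period):
    i_sum = sum(s * math.cos(2 * math.pi * k / period)
                for k, s in enumerate(samples))
    q_sum = sum(s * math.sin(2 * math.pi * k / period)
                for k, s in enumerate(samples))
    return math.atan2(q_sum, i_sum)  # spatial phase in radians

period = 8.0            # fiducial pitch in pixels (assumed)
true_phase = 0.7        # the offset we hope to recover
samples = [4 + 3 * math.cos(2 * math.pi * k / period - true_phase)
           for k in range(32)]  # four full periods with a DC pedestal
est = fiducial_phase(samples, period)
print(round(est, 3))  # 0.7
```

Because the correlation spans whole periods, the DC pedestal cancels and the recovered phase equals the fiducial's spatial offset; repeating this at several points along an axis yields the line of phase points that the LSF step then fits.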
While only particular embodiments have been disclosed herein, it will be readily apparent to persons skilled in the art that numerous changes and modifications can be made thereto, including the use of equivalent means, devices, and method steps, without departing from the spirit of the invention. For example, the above described and currently preferred embodiment uses a sensor grid somewhat larger than the page (patch) image. Alternatively, another approach might allow for a sensor grid smaller than the image page, which is then stepped or scanned across the projected data image. In the above currently preferred embodiment, the AGC and alignment fiducials are distinct from the changeable data, but alternatively it is possible to use the data portion of the signal in addition to, or as, the fiducials for driving the AGC circuitry. Basically, the data could be encoded in such a manner as to ensure a certain amount of energy in a particular spatial frequency range. Then a low pass and a band pass or high pass filter could be used to drive the AGC process. The output of the low pass filter would estimate the dc offset of the signal, and the output from the band pass or high pass filter would determine the level of gain (to be centered about the dc offset).
Another embodiment of generating the alignment data is to have a series of marks (or a collection of marks) making up the fiducial. These marks include alignment marks (fiducials) that are interspersed in a regular or irregular manner throughout the data. The alignment polynomial could then be determined by finding the position of each mark and plotting it against the known spatial relationship between the marks. The least squared error method could then be used to generate the best fit polynomial to the relationship between the known positions and the measured positions.
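The best-fit polynomial step described here can be sketched for a second-order polynomial (illustrative only; the mark positions and coefficients are invented for the example) by solving the least-squares normal equations directly:

```python
# Sketch: least-squares fit of measured mark positions to
# p(n) = a + b*n + c*n^2 by solving the 3x3 normal equations.
def fit_quadratic(ns, xs):
    s = [sum(n ** k for n in ns) for k in range(5)]          # moment sums
    t = [sum(x * n ** k for n, x in zip(ns, xs)) for k in range(3)]
    A = [[s[0], s[1], s[2], t[0]],                           # augmented matrix
         [s[1], s[2], s[3], t[1]],
         [s[2], s[3], s[4], t[2]]]
    for i in range(3):                       # elimination with partial pivoting
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            A[r] = [x - f * y for x, y in zip(A[r], A[i])]
    coef = [0.0] * 3
    for i in (2, 1, 0):                      # back substitution
        coef[i] = (A[i][3] - sum(A[i][j] * coef[j]
                                 for j in range(i + 1, 3))) / A[i][i]
    return coef  # [a, b, c]

ns = list(range(12))
xs = [10.0 + 2.0 * n + 0.05 * n * n for n in ns]  # linear change in magnification
a, b, c = fit_quadratic(ns, xs)
print(round(a, 3), round(b, 3), round(c, 3))  # 10.0 2.0 0.05
```

The nonzero `c` term captures the first-order distortion (linearly varying magnification) that a second-order alignment polynomial is said to correct.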

Claims

We claim:
1. In a system for retrieving data from an optical image containing a two-dimensional data pattern imaged onto sensors for readout, comprising: a sensor having an array of light-to-electrical sensing elements in a two-dimensional grid pattern for sensing data spots in a data pattern imaged thereon, said array of sensing elements having a density greater than that of the data spots in the data pattern so as to oversample the data spots in two dimensions; optical retrieval fiducials with said data pattern imaged on said sensor; and a data retrieval processor for said sensor determining amplitudes and locations of imaged data spots and producing amplitude and position corrected data from said sensor.
2. In the system for retrieving data from an optical image of claim 1, wherein said optical retrieval fiducials include AGC and alignment fiducials, and wherein said data retrieval processor comprises AGC and alignment processing and includes a polynomial subprocessor for generating corrected data positions relative to said array of sensing elements in said grid pattern.
3. In the system for retrieving data from an optical image of claim 2, wherein certain of said alignment fiducials cause spatial timing signals to be produced by said polynomial subprocessor, said system further including in-phase and quadrature spatial reference signals to modulate said spatial timing signals associated with said alignment fiducials in said imaged data pattern for generating true data spot positions.
4. In the system for retrieving data from an optical image of claim 3, further comprising in said alignment processing a low pass filter for removing spatial noise from said spatial timing signals.
5. In the system for retrieving data from an optical image of claim 1, wherein said optical retrieval fiducials contain AGC attributes, said data retrieval processor further comprising an AGC subprocessor for automatic gain control of the sensing of data spots due to variation of intensity across said image.
6. In the system for retrieving data from an optical image of claim 5, wherein said AGC subprocessor includes AGC peak detection circuitry for tracking image spot intensity across predetermined areas of said imaged data pattern.
7. In the system for retrieving data from an optical image of claim 6, wherein said peak detection circuitry includes two-dimensional signal processing that averages a baseline peak detection amplitude along one axis of the two-dimensional data pattern and interpolates between peak detection amplitudes along the other, orthogonal axis of the data pattern.
8. In the system for retrieving data from an optical image of claim 2, wherein said polynomial subprocessor of said alignment processing includes a least squares subprocessor to generate a best-fit of a polynomial to determine said corrected data positions relative to said array of sensing elements in said grid pattern.
9. In the system for retrieving data from an optical image of claim 2, wherein said polynomial subprocessor of said alignment processing includes process steps of computing coefficients of polynomials and adopting said coefficients to derive alignment parameters that in turn generate said corrected data positions, whereby at least certain misalignment effects due to optical, structural, and electrical imperfections are substantially corrected.
10. In the system for retrieving data from an optical image of claim 1, wherein said sensor grid pattern spans a larger area than an area of the image containing data that is to be retrieved.
11. In a system for retrieving data stored on a removable optical media by causing an optical image thereof to be projected onto sensors for readout, in which the image contains a two-dimensional data pattern including associated retrieval fiducials imaged onto sensors for readout, comprising: a sensor having light-to-electrical sensing elements arrayed in a two-dimensional pattern for sensing data in a light data pattern imaged thereon, said arrayed two-dimensional pattern of sensing elements constructed and arranged so as to oversample imaged data in two dimensions; and a retrieval processor for said sensor responding to said retrieval fiducials for determining corrected amplitude and position of imaged data, whereby the imaging of data on the sensor elements is corrected for variation in image intensity and alignment.
12. In the system for retrieving data as set forth in claim 11, wherein the retrieval fiducials included with said two-dimensional data pattern contain position alignment fiducials, and wherein said retrieval processor comprises position alignment processing.
13. In the system for retrieving data as set forth in claim 11, wherein the retrieval fiducials in said two-dimensional data pattern contain AGC fiducials, and wherein said retrieval processor comprises AGC processing.
14. In the system for retrieving data as set forth in claim 11, wherein said retrieval processor includes a pulse slimming subprocess to correct sensed data corrupted by signal interference between sensor elements.
15. In the system for retrieving data as set forth in claim 11, wherein said retrieval processor includes a two-dimensional pulse slimming subprocessor to minimize errors introduced by inter-symbol interference.
16. In a system for retrieving data from an optical image containing a two-dimensional data pattern having known optical retrieval fiducials imaged onto a sensor for readout, and compensating for various optical effects including translational and rotational errors of the data image as it is converted to data, comprising: a sensor array provided by light sensing elements arranged in a two-dimensional grid pattern generally conforming to an imaged data pattern, said light sensing elements being constructed and arranged with a density greater than data in said image data pattern so as to oversample the data image in both dimensions; sense level circuitry for said sensor elements producing for each element a multibit digital value representing an encoded optical characteristic sensed at each sensing element; and automatic gain control (AGC) for detecting image intensity across said pattern in response to said retrieval fiducials with said optical image.
17. In the system of claim 16, further comprising a two-dimensional pulse slimming processor to correct for two-dimensional inter-symbol interference.
18. In the system of claim 16, further comprising parallel readout and processing enabling data words of length determined by the number of data spots in each dimension of the data image to be outputted for controlling downstream data processes.
19. In a system for retrieving data from an optical image containing an electro-optically selected two-dimensional data pattern having retrieval fiducials imaged onto a sensor array for readout, and for compensating for various optical effects including translational and rotational offsets and magnification of the data image as it is converted to electrical data, and wherein each selected data pattern is divided into multiple zones, each zone having retrieval fiducials of known image characteristics including zone corners to assist in the retrieval process, comprising: a sensor array provided by a layer of light sensing elements arrayed in a two-dimensional grid pattern generally conforming to the imaged data pattern, said sensor elements being constructed and arranged to oversample the data image in both dimensions; a coarse alignment processor that determines approximate zone corner locations of each of said multiple zones of data; and a fine corner locating processor for determining a more exact position than said coarse alignment processor of a reference point in each said zone relative to which data positions are computed.
20. In the system of claim 19, further comprising an alignment processor to generate corrections for position errors in the imaging process using polynomials to describe the corrected positions relative to known positions of said sensor elements.
21. In the system of claim 20, said alignment processor further comprising a second order polynomial subprocessor for enhancing correction of image distortion due to optical effects.
EP97925517A 1996-05-10 1997-05-08 Alignment method and apparatus for retrieving information from a two-dimensional data array Withdrawn EP0979482A1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US1750296P 1996-05-10 1996-05-10
US17502P 1996-05-10
PCT/US1997/007967 WO1997043730A1 (en) 1996-05-10 1997-05-08 Alignment method and apparatus for retrieving information from a two-dimensional data array

Publications (1)

Publication Number Publication Date
EP0979482A1 true EP0979482A1 (en) 2000-02-16

Family

ID=21782952

Family Applications (1)

Application Number Title Priority Date Filing Date
EP97925517A Withdrawn EP0979482A1 (en) 1996-05-10 1997-05-08 Alignment method and apparatus for retrieving information from a two-dimensional data array

Country Status (6)

Country Link
EP (1) EP0979482A1 (en)
JP (1) JP2000510974A (en)
CN (1) CN1220019A (en)
AU (1) AU712943B2 (en)
CA (1) CA2253610A1 (en)
WO (1) WO1997043730A1 (en)



Also Published As

Publication number Publication date
AU3063397A (en) 1997-12-05
AU712943B2 (en) 1999-11-18
CA2253610A1 (en) 1997-11-20
WO1997043730A1 (en) 1997-11-20
JP2000510974A (en) 2000-08-22
CN1220019A (en) 1999-06-16

