WO1996014707A1 - Automatic check handling, using sync tags - Google Patents

Automatic check handling, using sync tags

Info

Publication number
WO1996014707A1
Authority
WO
WIPO (PCT)
Prior art keywords
bits
image
document
tag
imaging
Prior art date
Application number
PCT/US1995/014596
Other languages
French (fr)
Inventor
Gerald R. Smith
George E. Reasoner, Jr.
Daniel R. Edwards
Debora Y. Grosse
Robert C. Kidd
Original Assignee
Unisys Corporation
Priority date
Filing date
Publication date
Application filed by Unisys Corporation filed Critical Unisys Corporation
Publication of WO1996014707A1 publication Critical patent/WO1996014707A1/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/02Banking, e.g. interest calculation or account maintenance
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/32Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N1/32101Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N2201/00Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof
    • H04N2201/32Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N2201/3201Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H04N2201/3225Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title of data relating to an image, a page or a document
    • H04N2201/3226Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title of data relating to an image, a page or a document of identification information or the like, e.g. ID code, index, title, part of an image, reduced-size image
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N2201/00Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof
    • H04N2201/32Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N2201/3201Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H04N2201/3274Storage or retrieval of prestored additional information
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N2201/00Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof
    • H04N2201/32Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N2201/3201Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H04N2201/3278Transmission

Definitions

  • This invention relates to the automatic handling of checks and like documents and, more particularly, to a method and apparatus for the modified reproduction of such documents using "sync tags".
  • Workers deriving information from such documents will, at times, want to lift data (e.g., image portions) therefrom.
  • Such data can be automatically, electronically processed and rearranged. So doing is a general object of this case.
  • In Fig. 1D there is shown a financial document sorting system having a typical document sorter 12, which in the preferred embodiment of this invention comprises a model DP1800 sorter manufactured by the UNISYS Corporation of Blue Bell, Pennsylvania.
  • Sorter 12 contains a track 14 along which a plurality of financial documents 16 (e.g., checks) passes.
  • Sorter 12 includes a magnetic character reader 18 and magnetic strip character controller 20, as well as a document holder 22 and a pipelined image processor (imaging station) 24.
  • Controller 20 is coupled to reader 18 via signals on a bus 26, to a host computer 28 by signals on a bus 30, and to the pipelined image processor 24 by signals on a bus 32.
  • Computer 28 is coupled to an image storage module 34 by signals on a bus 36, while image storage module 34 is also coupled to the pipelined image processor 24 and to a plurality of workstations 38 via signals on buses 40 and 42, respectively.
  • documents 16 sequentially pass reader 18 which reads a typical code appearing upon the usual MICR codeline strip which is normally placed upon each of the documents 16.
  • the code read-out is then sent to computer 28 by signals on bus 30 for storage therein, and also to processor 24 by signals on bus 32.
  • Each document then passes imaging station 24, which creates a digital electronic image of the document and sends this processed image data, via signals on bus 40, to image storage module 34 for storage therein.
  • each document is then sorted, by sorter 12, in the usual way (based on the contents of the MICR codeline) and is held at document holder 22.
  • workstations 38 may sequentially request document image data from storage module 34. This image data is then downloaded to a workstation 38, via signals on bus 42, along with associated magnetic code data obtained from host computer 28.
  • the aforementioned document sort system 10 substantially eliminates manual handling of an individual document 16, once its associated dollar amount is so verified and inscribed, to thereby increase the efficiency, speed and timeliness of the overall document sorting system 10.
  • Compression, etc. (see the PCBA shown in Fig. 1C):
  • front and rear images of a document are captured, enhanced, and compressed by two independent mechanisms. Following compression, the front and rear images are combined with additional information specific to the document (previously received from the document processor), and stored in a database, separate from the document processor, for subsequent retrieval.
  • a hardware/software failure can cause the electronic images in the (front and rear image) processing stages to become unsynchronized, typically because one or more images are skipped on one side; or, the front and rear image bits may be synchronized with one another, but may not be synchronized with ancillary document information ("collateral document data" that is associated with the image bits). If this condition goes undetected, then the front and/or rear image bits will not be stored with the proper document record in the database.
  • identification-bits should be carried with the image data through the various stages of processing, so the image's identity can be maintained at each processing station, and can be transferred to a downstream processing station.
  • identification-bits should be available at the point where the front and rear image data, and "collateral document data", are merged to ensure that a full, correct data set is being combined for transfer to the database.
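The merge-point check described above can be sketched as follows (a Python illustration, not part of the patent; the record fields and names are assumptions):

```python
# Sketch of the merge-point check: front image bits, rear image bits,
# and collateral document data each carry identification bits, and a
# record is assembled only when all three identifiers agree.

def merge_document_record(front, rear, collateral):
    """Combine front/rear image data with collateral data, verifying
    that all three carry the same identification bits."""
    ids = {front["id"], rear["id"], collateral["id"]}
    if len(ids) != 1:
        # A mismatch means the pipeline has lost synchronization.
        raise ValueError(f"identification mismatch: {sorted(ids)}")
    return {
        "id": front["id"],
        "front_image": front["bits"],
        "rear_image": rear["bits"],
        "collateral": collateral["data"],
    }
```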
  • a salient object hereof is to so identify image data; especially with “sync bits” as detailed below.
  • Another object hereof is to preprocess such data and then compress it.
  • a related object is to do so for imaging data presented in a multi-bit data stream: compressing and reducing the bits, and sending the results to utilization means, preferably in an arrangement which comprises a preprocessing (buffer) stage for digitizing and scaling that presents the data stream in two parallel bit-streams, plus a compression stage providing two parallel, like compression paths for the in/out streams, via a prescribed first compression-processing and then a prescribed second compression-processing, to provide a prescribed time-compressed output to the utilization means.
  • a related general object is to execute such compression in "real time” and according to JPEG standards.
  • Another general object is to yield better compression of a data stream, along with normalizing and scaling it, while using "sync tag" identifiers.
  • Fig. 1D is a generalized block diagram of a typical document processing (sorting) system;
  • Fig. 1B is a block diagram indicating the transfer of "sync-tag" data through such a document processing system;
  • Fig. 1C is a like showing of a dual-path Histogram/Compressor unit thereof;
  • Fig. 2A is a block diagram of image processing portions of such a system indicating exemplary use (flow) of "sync tag" data according to this invention, while Fig. 2 illustrates the functions of Digitizing, Normalize/Scaling (N/S) and Processing/Compression of an exemplary document;
  • Fig. 3 is a block diagram illustrating a single Processing/Compression path in such a system;
  • Fig. 1A is a very general block diagram of part of a document sorter system, while Fig. 1 is a block diagram of a document imaging module embodiment for such a system;
  • Fig. 4 is a diagram of eight related processing paths
  • Fig. 5 is an item-related diagram of a subdivided Input Buffer for the array in Fig. 3, while Figs. 6 (6A-6D) indicate how the item in Fig. 5 is preferably addressed and processed;
  • Fig. 7 similarly illustrates how the top/bottom of the item are detected for such processing in two four-channel streams;
  • Figs. 1'-5' depict a related scaler embodiment; and Figs. 8-10 relate to the "sync tag", i.e.:
  • Fig. 8 is a plot of related exemplary processing signals and video/status data
  • Fig. 9 details the makeup of a preferred Normalizer/Scaler Unit.
  • Fig. 10 gives the makeup of a preferred JPEG Compressed Data Buffer.
  • FIG. 1A will be understood to depict part of a Financial Item Sorting System 10 having a typical Item Sorter 12 (e.g., in the preferred embodiment of this invention, a Model DP 1800 sorter manufactured by the Unisys Corporation of Blue Bell, PA).
  • Sorter 12 contains a Track for transport of checks or like Financial Items to be processed, e.g., by a typical reader and related controller. Additionally, the Sorter contains a Camera Assembly, and an Image Processor 24 apt for use in accordance with the teachings of the preferred embodiment of this invention.
  • Controller 20 is coupled to the Reader and to a Host Computer 28, as well as to Image Processor 24, as known in the art.
  • An Image Storage Module 34 is also coupled to the Image Processor 24 and to a plurality of Workstations ws.
  • the Camera Assembly is coupled to the Image Processor 24.
  • checks or like Items sequentially pass the Reader which can read a typical magnetic code thereon.
  • the code read-out is then sent to Computer 28, for storage therein and to Image Processor 24.
  • As each Item travels along the Track, it passes the Camera System which captures an image thereof and outputs a digital representation of the image to the Image Processor 24.
  • This digital representation comprises a plurality of image pixels having an intensity which can be represented by an "intensity-number" between 0 and 255.
  • Image Processor 24 then processes this digitized image and sends the associated signals to the Image Storage Module 34 for storage therein. After this, the documents are then sorted and stored in the usual way.
  • an operator at one of the Workstations may request the aforementioned image bits from Storage Module 34. These bits are then downloaded to his Workstation along with their associated identifying magnetic code data obtained from Host Computer 28.
  • an operator may electronically enter associated data (e.g., check dollar-amount) with a given document-image and electronically resolve any difficulties; (e.g., an error in reading the magnetic code thereon) entering and storing the needed correction for the image.
  • Each digitized image and its associated data and corrections then form a single corrected, completed electronic computerized record which can then be sent to Storage Module 34. Thereafter, it may be accessed for use in automatically inscribing associated data (e.g., dollar amount, corrections) upon the stored Items.
  • item Sorting System 10 substantially eliminates the manual handling of Items 16 when an associated dollar amount is placed thereon, thereby increasing the efficiency and timeliness of sorting and processing.
  • Captured images are routed to one of "n" JPEG processing/compression Stages according to a feature hereof.
  • Two of these JPEG processing/compression paths are preferably implemented on a Histogram/Compression printed circuit board assembly (PCBA) as shown in Fig. 1C.
  • Image Processor 24 of Fig. 1A is preferably characterized by the following: an Image Digitizer Unit (D of Fig. 2) for analog to digital conversion of the captured image, a Normalizer/Scaler (N/S Unit Fig. 2) for normalization and scaling of the video image, a set of "n", parallel JPEG Processing/Compression Units (Jl etc. of Fig. 2 and 24-A of Fig. 1A) for image processing/JPEG compression and a JPEG Compressed Data Buffer Unit (JCDB in Fig. 2) for collection and temporary storage of compressed images from the JPEG Processing/Compression Units.
  • JPEG refers to the compression standard by the "Joint Photographic Experts Group".
  • the JPEG compression hardware performs image processing on a 128 grey level, scaled image before executing a two-pass JPEG compression. Scaling is nominally at 137.5 dpi but can range from 137.5 dpi to 50 dpi in steps of 12.5 dpi.
  • This two-pass compression is designed—according to this feature—to reduce images to a predictable "packet size” apt for use in the entire High- Speed Check Imaging System.
  • These functions of the JPEG "P/C" (Processing/Compression) hardware, (detailed below) must be performed, here, in real time on check images as they move down a high-speed Check Sorter Track at an approximate rate of 1800 checks per minute.
  • The hardware thus performs JPEG compression of normalized and scaled images of documents captured in a Check Sorter at an average rate of 1800 checks per minute.
  • the diagram in Figure 2 indicates conditions under which each JPEG "P/C" path preferably operates and the performance required of such a unit to maintain overall system performance.
  • Figure 2 shows the processing of a sample of check images as they move left to right across the page, simulating the way a check would move through a Check Sorter.
  • The track speed of the sorter is assumed to be 300 inches per second; this means that a check 6 inches long will take 20ms to pass a fixed point on the Sorter Track.
  • checks can range in length from 5.75 inches to 9 inches (19ms to 30ms), with inter-checks-gaps ranging from 1.5 inches (5ms) to several inches.
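The transit times quoted above follow from simple arithmetic (time = length divided by track speed); a small sketch:

```python
# Transit-time arithmetic for the stated track speed of 300 in/s:
# a document of a given length takes length/speed to pass a fixed
# point on the Sorter Track.

TRACK_SPEED_IPS = 300.0  # inches per second, per the text

def transit_ms(length_inches):
    """Milliseconds for a document (or gap) of this length to pass."""
    return length_inches / TRACK_SPEED_IPS * 1000.0

# 6 in check -> 20 ms; 5.75 in -> ~19 ms; 9 in -> 30 ms; 1.5 in gap -> 5 ms
```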
  • the check images are captured by camera means, here preferably comprised of a vertical, 1024-element CCD array which samples 256 grey levels per pixel (8 bits) with a resolution of 200 pixels per inch. In the vertical direction, the Camera can capture images up to 5.12 inches high.
  • the 1024 element array takes a snapshot of the check every 16.66us as it moves down the Sorter Track, yielding a horizontal capture resolution of 200 pixels per inch.
  • These 1024-pixel scans (captured every 16.66us by the CCD array) are divided into eight 128-pixel Channels (shown as CH 0 through CH 7 in Fig. 2, each comprising 128 pixels of the scan).
  • Hardware in the Digitizer D converts each 128 pixel scan into eight serial streams of pixels, with one pixel being output approximately every 130ns.
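The subdivision of each scan into channels, as described above, amounts to the following (illustrative Python; the patent performs this in Digitizer hardware):

```python
# Each 1024-pixel vertical scan is split into eight 128-pixel
# channels (CH 0 .. CH 7); each channel is then clocked out as its
# own serial pixel stream.

def split_scan(scan):
    """Split one 1024-pixel scan into eight 128-pixel channels."""
    assert len(scan) == 1024, "one CCD scan is 1024 pixels"
    return [scan[ch * 128:(ch + 1) * 128] for ch in range(8)]
```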
  • the N/S (Normalizer/Scaler) hardware next normalizes the pixel values from the 1024 CCD elements and then scales the image down.
  • the maximum resolution after scaling is 137.5 pixels per inch (11/16ths scaling of the 200 dpi captured image) in both dimensions (e.g., see the example shown in Fig. 2, and see Figs. 1'-5' described below).
  • the 128 pixel scans in each Channel are reduced to 88 pixels per scan.
  • the N/S hardware "time-multiplexes" four channels' worth of data onto two, 8-bit serial outputs to the JPEG "P/C" hardware.
  • the 88 pixels from all four "even numbered" (total of 352 pixels per scan at 137.5 dpi) Channels (0, 2, 4, 6) are time-multiplexed along one serial stream, while the pixels from the four "odd” Channels (1, 3, 5 and 7) are multiplexed along a second serial stream.
  • the two serial pixel streams operate at 50ns/pixel (20 MHz) to guarantee that all 352 pixels per scan on each serial path can be transferred to the JPEG "P/C" hardware before the next scan is transferred.
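The even/odd multiplexing can be sketched as below (illustrative only; the exact pixel-level interleaving order used by the N/S hardware is not specified in this text, so the sketch simply serializes each channel's 88 scaled pixels in channel order):

```python
# After 11/16 scaling each channel carries 88 pixels per scan.
# The four even-numbered channels (0, 2, 4, 6) are multiplexed onto
# one serial stream and the four odd channels (1, 3, 5, 7) onto a
# second, each stream carrying 352 pixels per scan.

def multiplex_channels(channels):
    """channels: list of 8 lists of 88 pixels each.
    Returns (even_stream, odd_stream), 352 pixels apiece."""
    assert len(channels) == 8 and all(len(c) == 88 for c in channels)
    even = [p for ch in (0, 2, 4, 6) for p in channels[ch]]
    odd = [p for ch in (1, 3, 5, 7) for p in channels[ch]]
    return even, odd
```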
  • a pair of JPEG "P/C" paths are preferably implemented on an H/C PCB (Histogram/Compressor Printed Circuit Board, as indicated in Fig. 1C).
  • This H/C PCB must detect the image dimensions and perform image processing on the scaled image prior to compression.
  • Selected image processing algorithms require a grey level histogram of the entire image prior to execution. This means the entire image must be buffered (e.g., at 3-1, Fig. 3) and a histogram generated (e.g., at 3-7) before image processing can begin. Once image processing is complete, compression can begin.
  • the performance of the entire image system is what dictates how the JPEG Processing/Compression hardware will reduce each image to a target packet size; this is why the here-detailed JPEG Compression hardware embodiment executes a 2-pass compression.
  • the first pass uses a "standard" QM (Quantization Matrix) for JPEG compression.
  • the results of the first pass compression, as well as the detected image dimensions, are used to pick a second QM for a second, final compression that will reduce the scaled image to the desired compressed packet size.
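The two-pass scheme can be sketched as follows (the compress function and the inverse size model are stand-ins; the patent's actual quantization tables and QM-selection rule are not reproduced here):

```python
# Pass 1 compresses with a "standard" quantization matrix (QM).
# The pass-1 size (and, in the patent, the image dimensions) then
# selects a second QM so the final pass lands at the target packet
# size. Here a rough inverse model stands in for QM selection.

def two_pass_compress(image_bytes, target_size, compress):
    """compress(data, qm_scale) -> bytes; a larger qm_scale means
    coarser quantization and thus a smaller output."""
    first = compress(image_bytes, qm_scale=1.0)      # pass 1: standard QM
    qm_scale = max(1.0, len(first) / target_size)    # pick second QM
    return compress(image_bytes, qm_scale=qm_scale)  # pass 2: final

def toy_compress(data, qm_scale):
    # Stand-in compressor: keeps every n-th byte, n ~ qm_scale.
    step = max(1, int(qm_scale))
    return data[::step]
```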
  • the JPEG Processing/Compression hardware must perform all these functions in real time which equates to generating a JPEG compression packet in 20ms for a 6-inch check. Because a single JPEG "P/C" path cannot meet these requirements, multiple paths operating in parallel are required.
  • the described H/C PCB was equipped with two independent JPEG "P/C" paths for this purpose (see Figs. 3, 1C); preferably, the system has locations for up to 4 H/C PCBs for Front/Rear imaging; this means the system can have as many as 8 JPEG compression paths operating in parallel (two for each H/C PCB, i.e., a pair on each side).
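The need for multiple parallel paths follows from throughput arithmetic; a hedged sketch (the per-path compression time in the example is illustrative, not a figure from the patent):

```python
import math

# If one JPEG P/C path needs t_path ms per image but a new check can
# arrive roughly every t_arrival ms, then ceil(t_path / t_arrival)
# paths must run in parallel to keep up.

def paths_required(t_path_ms, t_arrival_ms):
    return math.ceil(t_path_ms / t_arrival_ms)

# e.g., a hypothetical 160 ms per image against 20 ms arrivals would
# require 8 parallel paths, the system's stated maximum.
```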
  • Fig. 4 indicates how two JPEG processing/compression paths are implemented on one H/C PCBA and how up to 4 H/Cs can be used on each side (front and back) of the imaging system.
  • the JPEG Image Module is a device that, in conjunction with a document processor, allows image data from the front and rear of documents to be collected, compressed, packaged and (finally) transmitted to a mass storage device.
  • the document processor is a device which is capable of transporting documents down a track, reading data encoded on the documents and providing that data to an application program running on a host computer and (finally) sorting documents to bins based upon that information.
  • the IM provides outputs to alternative recognition devices such as a Courtesy Amount Reader (CAR) unit (1-23, Fig. 1) and mass storage devices such as a Storage Retrieval Module (SRM: 1-20, Fig. 1).
  • Images can be retrieved from the SRM for data entry, or data correction.
  • the new, augmented or corrected data is then passed on to the application mainframe.
  • Data read by the CAR unit is also returned to the application mainframe. In this way the document data base can be more economically updated and corrected for use by the application program.
  • Fig. 1 is a block diagram of major functional portions of such an imaging module IM, apt for use with this invention.
  • module IM is used, as a printed circuit unit (board) for imaging/processing of images from one side of passing documents, with a second, like board (not shown) used for the other (Rear) side.
  • Module IM produces compressed 8-bit gray level (electronic) images (image data) at resolutions of about 100 pixels per inch.
  • Image data is compressed using a "JPEG Baseline Sequential" technique.
  • JPEG is a family of gray scale image compression techniques adopted by the International Standards Organization (ISO) Joint Photographic Expert Group (JPEG) subcommittee, as workers realize.
  • the compressed images are encapsulated in the Tagged Image File Format (TIFF).
  • TIFF is an industry standard format for the encapsulation of image data, as workers realize.
  • an electronic camera Unit 1-1 captures successive images (e.g., of passing checks or like financial documents, front and rear, suitably illuminated; e.g., lamp illumination controlled by Unit 1-3).
  • the camera uses a segmented linear "Charge Coupled Photo Diode" (CCPD) array ("segmented" meaning that the linear array of photo-diodes is subdivided into 8 segments or channels, each of which has its own output, thus effectively multiplying by the number of subdivisions the rate at which image data can be clocked out; here the channels are numbered 0 through 7).
  • the electronic image from each channel is digitized by an Image Digitizer 1-5 and presented, in parallel, to a pair of like Normalizer/Scaler (N/S) units; a master N/S 1-7 and a slave N/S 1-9 (e.g., on separate boards, each for identically processing four video data channels—e.g., four "even” numbered channels on Master, four odd on Slave). These channels are merged into a single output channel for compression (as further detailed below). Their output, the scaled-normalized image signal path, is indicated in Fig. 1.
  • the transfers of data between the Digitizer, N/S, and other functions are more diagrammatically illustrated in Fig. 2, while processing/compression is further, similarly illustrated in Fig. 3.
  • An additional output of normalized (unscaled) image data comes from each N/S board to provide image data to a CAR system 1-22, 1-21, 1-23.
  • the output of the two N/S boards is presented to a Histogram/Compressor (HC) array preferably comprising from two to four like units (boards) 1-11A, 1-11B, 1-11C and 1-11D operating in parallel.
  • the number of boards depends on required throughput (in documents per second) as determined by the document processor's throughput and the type of error detection required.
  • Each unit 1-11 processes and compresses image data from N/S units 1-7, 1-9 and implements two like independent data paths, each of which can process an entire single image.
  • the two paths on the H/C board can independently process and compress separate images, or can compress the same image synchronously and compare compression results to detect failures.
  • the sequencing of image data into, and compressed data from, the H/Cs is controlled by a state machine in the JPEG Compressed Data Buffer (JCDB) 1-15.
  • When the JCDB is ready to accept data from a path, it signals that path to output data for storage on the JCDB.
  • a signal PDOCPRES_N which is activated when the N/S detects that a document is in front of the camera is applied from the N/S units to JCDB 1-15, whose state machine uses it to sequentially allocate images to H/C paths.
  • Buffer JCDB 1-15 is coupled to a Main Processor (MP) 1-16 which packages the JPEG compressed image data into the TIFF format employed by the system.
  • The compressed data (Fig. 1) is transferred from the MP 1-16 to the Point-to-Point Optical Link (POL) board 1-19, which is a fiber-optic interface, for transmission to the Storage and Retrieval Module (SRM) 1-20.
  • the SRM stores the images and, under the direction of the application mainframe, manages the TIFF packaged compressed image files for later distribution to image workstations (not shown) for viewing, correction or entry of associated document data.
  • the Diagnostic Transport Interface (DTI) 1-17 is one end of the interface between the document processor and imaging module IM.
  • the Image Interface Board (IIB) 1-18 resides in the document processor and sends document data and control signals from the document processor to the DTI.
  • the DTI receives this data and passes document data, across the Main Bus, to MP 1-16 for inclusion in the TIFF packet for the associated compressed image.
  • the DTI also receives control signals from the document processor's tracking logic through the IIB 1-18 that indicate that a document is approaching the camera. These signals are passed on to N/S boards 1-7, 1-9 where they are used to prime the edge detection circuits to detect the leading and trailing edges of documents passing the camera (along the track).
  • the Character Recognition (CAR) subsystem 1-21, 1-22 and 1-23 consists of two circuit boards plus the CAR Unit (see CAR Preprocessor or Bypass 1-22, CAR Port 1-21 and the CAR Unit 1-23).
  • Both boards perform specialized image processing functions on the normalized image data from the N/S boards; they improve the chances that the Character Recognition Unit (CAR Unit) 1-23 will successfully read information from the document.
  • the processed image data is sent to the CAR unit for recognition of data.
  • the results of reading the data are returned to the document processor for inclusion in the data files stored on the application software mainframe.
  • the information read successfully from the documents can be used to correct the data files on the mainframe.
  • Module IM can be processing, compressing or storing images, and handling document data, for as many as 25 images at any one time. It is important to make sure that the document's images and associated document data from the document processor remain in synchronization so documents are not misidentified.
  • each document is detected by tracking logic (see below) and assigned a tracking identification code ("sync tag").
  • the sync tags are assigned sequentially to each image (e.g., by software in the DTI) to identify it and prevent its loss, as well as to help later in sorting. A tag preferably is triggered on detection of a check (document) leading edge and comprises a 16-bit identifier code (for image, frame, etc.) assigned in FIFO order, preferably supplied by the DTI unit.
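Sync-tag assignment as described (sequential 16-bit identifiers issued on each leading-edge detection, in FIFO order) can be sketched as:

```python
# Sketch of the DTI's sync-tag assignment: each detected leading edge
# receives the next 16-bit identifier, wrapping modulo 2**16.

class SyncTagGenerator:
    def __init__(self, start=0):
        self._next = start

    def on_leading_edge(self):
        """Issue the next sequential 16-bit sync tag."""
        tag = self._next
        self._next = (self._next + 1) & 0xFFFF  # 16-bit wraparound
        return tag
```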
  • the document processor tracking logic is used to determine the physical location of a document in the track.
  • the document processor also has subsystems, such as Magnetic Ink Character Recognition (MICR), readers that may generate data that is subsequently associated with the document moving through that subsystem.
  • Module IM receives the sync tags and document data and queues them up in the memory of Main Processor (MP) 1-16, in a first in first out (FIFO) fashion; likewise the sync tags are stored in a FIFO queue on the NS.
  • As a document approaches the camera, the tracking logic senses it and a signal is sent to the IM to alert the NS to find the leading edge of the document.
  • When the document is found, the NS activates the PDOCPRES_N signal, thus alerting the H/C units to the forthcoming image data.
  • When the trailing edge of the document is detected by the tracking logic, it sends a signal to module IM to alert the NS to find this trailing edge.
  • the document's sync tag (see Fig. 3A) is pulled from the queue in the N/S and attached to the "document status field" (which is appended to the trailing end of the related image data).
  • processing image data for the other side proceeds in a similar manner—preferably with a separate independent camera, digitizer, NS, HC, and JCDB boards.
  • the sync tags are used in module IM in the following ways to assure that image data from a given document are kept "in sync" (i.e., in sequence, the same for any given document): — When the NS finishes processing an item, the Diagnostic Transport Interface checks to make sure that the sync tags from both the front and rear master NS are the expected "next tags" (in sequence) and that they are identical.
  • the H/Cs use the sync tags to assure that the image data from the master and slave NS units are identical, assuring that these boards are "in synchronization" (i.e., handling data from same document) .
  • the HCs compress the image, then the sync tags are passed along with the compressed-image-data to the JCDBs, where they are queued-up (in FIFO fashion) .
  • the JCDB interrupts the MP when it has image data available.
  • the MP reads the image data and status bits (which include the sync tag) from the JCDB, and checks to see that the sync tag from the image read from the JCDB matches the expected sync tag as stored in its own memory queue. — The sync tag is also fed to the "CAR" subsystem.
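The expected-tag bookkeeping described above can be sketched as follows (illustrative; each checking stage, such as the MP, keeps its own FIFO of expected tags and compares arriving tags against the head of that queue):

```python
from collections import deque

# Sketch of a per-stage sync-tag check: tags are queued in FIFO order
# as documents are detected; arriving image data must carry the tag at
# the head of the queue, else the stage would raise a Stop Flow Request.

class TagChecker:
    def __init__(self):
        self.expected = deque()

    def enqueue(self, tag):
        self.expected.append(tag)

    def check(self, arriving_tag):
        """True if the arriving tag is the expected next tag."""
        if not self.expected or self.expected[0] != arriving_tag:
            return False  # out of sequence: would trigger an SFR
        self.expected.popleft()
        return True
```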
  • a "Stop Flow Request" (SFR) is generated by DTI, JCDB, N/S, H/C or ID when they detect an "error condition": e.g., a sync tag mismatch, incorrectly matching document data with image data, or detection of conditions that would corrupt or render an image unusable.
  • the JCDB detects such an SFR signal and interrupts the MP (e.g., before check processing is carried further); e.g., JCDB detects the SFR and latches the source thereof in a Stop Flow Accumulator (contained on the JCDB) for interrogation by the MP.
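The Stop Flow Accumulator's latch-and-interrogate behavior can be sketched as below (the bit assignments are illustrative, not taken from the patent):

```python
# Sketch of the Stop Flow Accumulator on the JCDB: each SFR source
# (DTI, JCDB, N/S, H/C, ID) latches a bit when it raises a Stop Flow
# Request; the MP later interrogates the accumulator to learn which
# source(s) fired. Bit positions are hypothetical.

SFR_SOURCES = {"DTI": 0x01, "JCDB": 0x02, "NS": 0x04, "HC": 0x08, "ID": 0x10}

class StopFlowAccumulator:
    def __init__(self):
        self.latched = 0

    def raise_sfr(self, source):
        self.latched |= SFR_SOURCES[source]  # latch the source bit

    def interrogate(self):
        """Return the names of all sources that have raised an SFR."""
        return [s for s, bit in SFR_SOURCES.items() if self.latched & bit]
```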
  • Stop Flow Requests will reduce the number of documents, time, and complexity that would later be involved in "error recovery” .
  • a "leading edge not found" fault might be detected at NS unit (signal from the tracking logic received, alerting the IM to arrival of a document but the edge of the document cannot be determined from the image data).
  • the N/S unit would issue a SFR to processor MP through the JCDB, terminating further processing of all images from this document—as well as of other items upstream of this item in the document processor's track.
  • the MP first notifies the document processor to stop the flow of documents, and then finishes handling the image data it has confidence in (e.g., compressed images downstream of the point of fault detection).
  • the mishandled document, and any documents that followed it through the transport, will have to be recovered from the document processor's pockets and repassed through the document processor on resumption of processing.
  • This system may be set up for various Stop Flow requirements (according to the application involved) and to automatically, programmatically "mask" certain faults accordingly. For example, some applications could require a SFR if the trailing edge of a document is not found, yet other applications could be more forgiving and not require a SFR for this situation.
  • Stop Flow feature reduces the number of documents that have to be repassed through the document processor when a SFR occurs because the processor is notified immediately when an error condition generates a stop flow request, and because it is associated with a particular item or event; thus antecedent documents (conditions) can be handled normally, thus reducing the number of documents that must be specially handled to recover from an SFR event.
  • Camera Quality Monitor (CQM), Fig. 1:
  • the design of module IM also accommodates adding an optional Camera Quality Monitor unit (CQM) 1-13.
  • the CQM monitors normalized image data and normalized scaled image data from the NS boards, as well as compressed image data being sent from the HCs to the JCDB.
  • one CQM monitors data on the IM for the front image, the other monitors data for the rear image.
  • a variety of problems associated with the camera can be detected by analysis of the data collected at these points. As one example: when components in illumination system 1-3 age, lamp output may dim. Monitoring the normalized data and checking for a long-term change in average document brightness can allow one to notify service personnel that replacement or adjustment is required before the images are badly degraded.
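The kind of long-term brightness check the CQM could perform is sketched below; the window size and alert threshold are illustrative assumptions, not values from the text:

```python
def brightness_drift(history, window=1000, alert_fraction=0.1):
    """Flag a long-term drop in average document brightness (e.g. an
    aging lamp) by comparing the oldest and newest window averages."""
    if len(history) < 2 * window:
        return False                      # not enough history yet
    old = sum(history[:window]) / window
    new = sum(history[-window:]) / window
    return new < old * (1 - alert_fraction)

dim = [100.0] * 1000 + [85.0] * 1000     # a 15% drop over time
assert brightness_drift(dim)             # service alert warranted
assert not brightness_drift([100.0] * 2000)
```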
  • the front and rear images are each monitored by their own CQM.
  • the Normalizer/Scaler function is, as above noted, preferably implemented by a two board set (e.g., see Fig. 1: Master 1-7 processing the four even-numbered channels from Digitizer 1-5; Slave 1-9 similarly processing the four odd-numbered channels therefrom) . These boards 1-7, 1-9 operate in synchronism, with one, the master, arbitrating. Master N/S Unit 1-7 also provides "document synchronization" signals (such as PDOCPRES_N) used by downstream boards to identify that a document is being imaged. The N/S pair also provide image data to the CAR unit.
  • the N/S units employ a scan capture circuit to capture pre-normalized image data for use in generating the numeric tables required to normalize the image data during normal operation.
  • Upon a command from the DTI, each N/S starts collecting pre-normalized image data; each channel has its own capture circuit capturing 16 consecutive scans of image data (128 bytes of data per scan) into a first in first out (FIFO) memory that can be read by the DTI board for transfer again to the MP for processing.
  • Data is collected as part of a calibration procedure in which the camera images a uniformly white target, and then a black target, to provide "ideal" white and black stimuli to the camera.
  • Software running on the MP executes an algorithm that transforms this raw data into data tables suitable for normalizing image data.
  • the data tables are transferred from the MP to the DTI and then stored in "Look Up" tables on the N/S.
  • the N/S normalizes incoming image data using the information in the look-up tables (preferably two 64K X 8 RAM, one for test). Normalization is accomplished by using the image data and its position in the scan to sequentially address the look-up table. The content of each address has been precalculated by the normalization software running on the MP from pre-normalized data (collected during calibration) to be the normalized value of the image data for that pre-normalized value and position in the scan. There are 128 possible output values for each of the 128 pixel positions in the scan.
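The look-up addressing can be illustrated with a small sketch; an identity table stands in here for the MP-calculated calibration tables, and only the 128-position-by-128-value portion described above is modeled (the real LUT is a 64K x 8 RAM):

```python
def normalize_pixel(lut, position, raw_value):
    """Address the look-up table with the pixel's scan position (0-127)
    and its 7-bit raw value; the stored entry is the precalculated
    normalized value for that position/value pair."""
    return lut[(position << 7) | raw_value]

# Identity table: each position's 128 entries map value v -> v,
# standing in for real calibration data computed by the MP.
lut = [v for _ in range(128) for v in range(128)]
assert normalize_pixel(lut, 0, 42) == 42
assert normalize_pixel(lut, 127, 100) == 100
```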
  • Scaling is preferably based on PROM look-up tables; the tables allow the selection, through software running on the MP, of up to 8 "scaling factors" (from a factor of 11/16 down to a factor of 5/16), along with the method of scaling.
  • the preferred scaling is 8/16 (1/2) using a 2X2 pixel window average method.
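The preferred 8/16 (1/2) scaling with a 2x2 pixel window average amounts to the following software sketch of what the PROM-table hardware achieves:

```python
def scale_half(image):
    """Scale an image by 8/16 (1/2) using a 2x2 pixel window average:
    each output pixel is the mean of a non-overlapping 2x2 input window."""
    h, w = len(image), len(image[0])
    return [[(image[r][c] + image[r][c + 1] +
              image[r + 1][c] + image[r + 1][c + 1]) // 4
             for c in range(0, w - 1, 2)]
            for r in range(0, h - 1, 2)]

img = [[10, 20, 30, 40],
       [10, 20, 30, 40]]
assert scale_half(img) == [[15, 35]]   # each 2x2 window averaged
```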
  • Document Edge Detection is performed with an algorithm whereby each channel compares the average brightness of the present scan line with the average brightness of the transport with no document present. When no document is present, the detector averages and stores the average brightness of the transport ("no document brightness"). Document tracking logic in the document processor notifies these circuits that a document is about to enter, or leave, the range of the camera, and that comparisons should begin. If a significant change of brightness occurs in a channel, then (by this) that channel indicates that it has found an edge. Leading edges are found when any one of the 8 detectors finds that brightness has increased above the stored "no document average" by a preferred threshold of 14 gray levels, and PDOCPRES_N is asserted.
  • Trailing edges are declared if all 8 channels have found that they have returned to within a preferred 18 gray levels of the "no document" average; then PDOCPRES_N is cleared. If a leading edge is expected and not found by any detectors after a prescribed time, then "leading edge indication” is "forced” by asserting PDOCPRES_N. This implies that the document image may have problems, so this occurrence is flagged in "document status”. The status data is transferred along with the normalized and scaled (and later compressed) image data as it moves through the system.
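The any-channel/all-channels edge rules above can be sketched as follows (the per-channel averages are assumed to have already been computed by the detectors):

```python
LEAD_THRESHOLD = 14   # gray levels above the "no document" average
TRAIL_THRESHOLD = 18  # gray levels within the "no document" average

def leading_edge(channel_avgs, no_doc_avgs):
    """PDOCPRES_N is asserted when ANY of the 8 channels brightens
    past the stored average by the leading-edge threshold."""
    return any(cur > base + LEAD_THRESHOLD
               for cur, base in zip(channel_avgs, no_doc_avgs))

def trailing_edge(channel_avgs, no_doc_avgs):
    """PDOCPRES_N is cleared when ALL 8 channels have returned to
    within the trailing-edge threshold of the stored average."""
    return all(abs(cur - base) <= TRAIL_THRESHOLD
               for cur, base in zip(channel_avgs, no_doc_avgs))

base = [50] * 8
assert leading_edge([50] * 7 + [70], base)   # one channel suffices
assert not leading_edge([55] * 8, base)      # below the 14-level rise
assert trailing_edge([60] * 8, base)         # all within 18 levels
```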
  • the Histogram/Compressor functions are, as noted above, preferably performed in two like HC boards per side.
  • Module IM accommodates up to 4 HC boards per side if higher throughput or increased levels of fault detection so require.
  • Each H/C contains two image paths, and each path is capable of processing and compressing an entire image as received from the master and slave NS boards.
  • the JCDB's state machine determines that an image has arrived via the PDOCPRES_N signal and selects the HC path to receive the images from the master and slave NS; paths are assigned in a rotating sequence and according to the type of fault detection required.
  • the preferred configuration uses rotating redundancy to periodically verify HC operation; thus paths can be assigned in the following order: 0-1, 2,
  • the HC boards contain logic that detects if the output data of the two paths does not match (when they are both processing the same image); such a detection is an indication that a fault has occurred on one of the two paths. "Rotating redundancy” is useful for checking for "hard failures”. If "transient failures" are of concern to a particular application, then additional HC boards can be added to the system and image data can be processed fully redundantly. As a feature hereof, this HC unit allows comparison of two compressed images in their entirety (e.g., vs by-channels).
  • a HC path receives the combined data from the master and slave NS boards and combines the odd and even channel data into one image which is stored in a buffer memory (3-1, Fig. 3).
  • As the data is being input to the memory, it is sampled by a histogram width-detector 3-5, and a top and bottom edge detector 3-3.
  • the histogram circuit 3-7 builds a histogram of image data values composing the document image. The histogram data is used to modify the image (as described later).
  • the top and bottom edge detectors 3-3 sample the image data and attempt to find the top and bottom of the document image (within the full height of the input data scan).
  • the preferred algorithm looks for the highest and lowest occurrences of "non-background data" within all the scans of a document.
  • a "non-background” is declared if there is a gray level transition within 5 pixels.
  • the points determined for top and bottom, and width, are made available to the Digital Signal Processor (DSP 3-15) which uses them when pulling the document image from the buffer. Finding the width (e.g., lead and trail edges) and the top and bottom of images allows the Digital Signal Processor 3-15 (DSP) to handle only genuine document data and ignore all else, thus improving throughput and providing images that do not have extraneous data associated with them.
  • the software running on DSP 3-15 executes an algorithm which uses the histogram data to alter the original image data in the HC memory in a manner that reduces contrast and stretches gray levels thereby improving legibility and compression characteristics (by means of the Remap 3-13).
  • the HC compresses images in "two pass” fashion.
  • the first pass preferably uses a JPEG Quantization Matrix (QM) .
  • QM is a set of numeric factors used by the JPEG algorithm to affect image compressibility and is known to workers in the art.
  • the preferred (or target) QM is picked in the course of IM design to optimize both image quality and the size of the resulting compressed image packet.
  • the same QM is used for first pass compression of all images.
  • the size of the compressed image data packet is checked, along with image size. If the packet is too large (considering the size of the image), an alternative QM is used for "second pass compression"; the alternative picked is determined from experience and is embodied in the algorithm executing on the DSP. If the packet is smaller than expected, a less aggressive QM may be used; otherwise the target QM is reused.
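The second-pass QM decision reduces to comparing packet size against image size. The ratio thresholds below are illustrative assumptions; the real values were determined from experience and are embodied in the algorithm executing on the DSP, which the text does not disclose:

```python
def select_second_pass_qm(packet_bytes, image_pixels,
                          target_qm, aggressive_qm, gentle_qm,
                          max_ratio=0.25, min_ratio=0.05):
    """Pick the QM for second-pass compression from first-pass results:
    too large a packet -> a more aggressive QM; smaller than expected
    -> a less aggressive QM; otherwise reuse the target QM."""
    ratio = packet_bytes / image_pixels
    if ratio > max_ratio:
        return aggressive_qm
    if ratio < min_ratio:
        return gentle_qm
    return target_qm

assert select_second_pass_qm(30000, 100000, "target", "aggr", "gentle") == "aggr"
assert select_second_pass_qm(2000, 100000, "target", "aggr", "gentle") == "gentle"
assert select_second_pass_qm(10000, 100000, "target", "aggr", "gentle") == "target"
```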
  • the HC builds a standard JPEG compressed image structure in Output Buffer 3-25 and appends to it the status bits received from the NS 3-9, along with its own status bits and other information.
  • a typical such HC unit (e.g., 1-11A for bits 0, 1) may do the following:
  • JCDB 1-15 (Fig. 1) provides the interface between the HC paths and the MP 1-16. It preferably comprises a set (e.g., 16 x 128 kilobyte) of compressed image buffer units, preferably operated as eight redundant buffer pairs (Primary-, Secondary-). The contents of the primary buffer are compared to the contents of the secondary buffer, upon readout, as a method of validating hardware performance. These buffers are directly readable and writable by the MP via a local bus extension of the microprocessor of the MP. Should extra buffering capacity be needed (e.g., when the POL is busy), the JCDB can fall out of redundant operation and use its 8 secondary buffers to store images.
  • the JCDB has a state machine which controls the selection of NS-to-HC paths for input of images from the NSs and the transfer of compressed image packets out of the HCs to the JCDB.
  • the MP can program the state machine on the JCDB to operate module IM with 2, 3, or 4 HC boards.
  • the JCDB also receives the Stop Flow Request lines from all boards. If any of these lines is activated the JCDB notifies the MP.
  • Each of the NS, HC, or JCDB boards in module IM can have any of their detected fault conditions activated, by command from the MP, to trigger a "Stop Flow Request" line to the JCDB. Results of this activation were discussed earlier. Functions of JCDB:
  • the JCDB can be programmed to operate with 2, 3 or 4 HC boards; it also controls run-time testing of HC units by selecting both of an HC's paths to process image data, and monitors "busy" and "ready" signals from HC units to accommodate document throughput;
  • Image Digitizer ID 1-5 receives the output of the camera in the form of 8 pairs of analog signals (e.g., 8-odd, 8-even pixels from 8-segment CCPD) plus a clock signal, and a "camera side" identification (front vs rear).
  • the "side information" is determined at time of manufacture by installing a jumper.
  • this ID output comprises: eight channels of serial, digitized video, with parity for each channel.
  • the preferred clock rate is 8.125 MHz, with a 900ns idle period between every scanline (16.7us).
  • Digitizer ID also has a diagnostic memory that can be loaded with test patterns by the DTI over a special (C and D) bus.
  • the DTI senses this and commands a diagnostic RAM to inject test patterns into the eight channels. Identical data will be output on all 8 channels. These patterns are used by the NS to test normalization and scaling logic (by comparing data); they are also similarly used to test the interface between the ID and NS boards.
  • the NS normalization circuit must alter its operation when test patterns are present on the channels to assure that the outputs of the multiple normalizer circuits are identical.
  • the test portion of the normalizer look-up table is used to provide a 1 to 1 mapping when test patterns are activated, thus assuring that "data-in” matches "data-out".
  • FIG. 3 (as aforenoted) indicates the preferred functions performed along a single JPEG processing/compression path on a preferred H/C PCB. Two independent JPEG processing/compression paths are implemented on each H/C PCB. These blocks are characterized briefly as follows: a - N/S Inputs:
  • the JPEG compression path receives Normalized, Scaled (N/S) image data. Each path contains 4 Channels of time-multiplexed, scaled scans; scaled down to a maximum of 137.5 pixels per inch.
  • the pixels are delivered across both paths at a rate of 50ns per pixel (20 MHz).
  • Each pixel is 7 bits representing 128 grey levels with the MSB ("Most Significant Bit") of the 8-bit path set to 0.
  • b - Input Buffer 3-1: The entire scaled image is stored in a 1Meg x 8 Buffer 3-1 as it is delivered from the N/S hardware.
  • Top/Bottom Detect Circuitry 3-3 finds the Top and Bottom of the scaled image.
  • the picture shown in Fig. 3 illustrates an exaggerated skew condition that may occur (usually to a much smaller degree) as the sorter moves the check past the CCD camera.
  • the Top/Bottom Circuitry "finds" the extreme top and bottom points of such a skewed image.
  • Histogram Circuit 3-7 samples every other pixel (one pixel every 50ns, switching between both signal paths from the N/S hardware) and sorts the 128 grey levels into 64 bins.
  • the final Histogram will have sorted 50% of the scaled image's pixels, selected in a checkerboard arrangement across the entire image, into the 64 bins that comprise the Histogram 3-7.
  • Each bin can record up to 64K occurrences of adjacent grey levels and will "peg" at 64K if that bin overflows.
  • Status and other image-related information is appended to the Trailing Edge of the scaled image as it is delivered to the JPEG processing/compression path from the N/S hardware. This information is buffered in Status block 3-9 so that it can be retrieved and sent on with the compressed image.
  • g - Transpose and Block Unit 3-11: The N/S hardware delivers scaled image data in Vertical scans spanning 8 Channels and starting from either the Top or Bottom of the image (depending on camera orientation), while moving vertically.
  • JPEG compression requires image pixels to be compressed in 8x8, raster-scanned Pixel Blocks. Pixel Blocks are then scanned in the same manner as pixels within a block: horizontally.
  • The Transpose and Block Unit 3-11 serves as an address generator for the 1Meg x 8 Input Buffer 3-1. (See Fig. 6, also.)
  • Image processing is done by changing (Remapping at 3-13) the 7-bit scaled pixels in the Input Buffer 3-1 to 8-bit image-processed pixels as the image is being compressed.
  • a DSP Unit 3-15 (Digital Signal Processor) determines the Remap Values by reading the Histogram.
  • a DCT (Discrete Cosine Transform), quantization and Huffman coding for JPEG compression are done in a Chip Set (3-17, preferably from the LSI Logic Corporation).
  • This Chip Set 3-17 does not need to be “throttled” as it compresses an image, and, therefore, "compression latency" through this Chip Set is deterministic.
  • Scaled, remapped pixels are sent through the LSI Chip Set 3-17 at a 25 MHz rate.
  • the LSI output is a 32-bit word containing JPEG compressed data (but does not have bytes "stuffed” or restart codes inserted, these being inserted at Block 3-19).
  • the Packet Count Register 3-21 records the amount of compressed data emerging from the LSI JPEG Compression Chip Set 3-17. This Count is used by the DSP Unit 3-15 to retrieve the results of first pass compression so a second QM can be selected for the second (final) compression.
  • the Post Processor performs the JPEG required byte stuffing and Restart Marker insertion not done by the LSI Chip Set.
  • the Post Processor funnels the 32-bit outputs from the LSI Chip Set down to the 16-bit wide Output Buffer 3-25.
  • a table of addresses pointing to the locations of Restart Markers in the Compressed Image Packet is stored in the Restart Marker FIFO 3-23 and can be used by the DSP 3-15 to truncate images on entropy coded segment boundaries.
  • the JPEG processing/compression path assembles the Compressed Image Packet in a 64K x 16 Output Buffer 3-25.
  • Once the Compressed Image Packet is completely assembled, it can be burst-transferred to the next component in the imaging system, the JCDB (JPEG Compressed Data Buffer).
  • the DSP 3-15 performs the following functions: i - After an image is input from the N/S, the DSP 3-15 reads the 64-bin Histogram, executes the image processing algorithms and loads the resultant Remap Table into the Remap RAM (at 3-13);
  • the DSP loads the default QM and Huffman tables into the LSI JPEG Compression hardware;
  • the DSP retrieves the results (of first pass compression) from the Packet Count Register 3-21, and the image dimensions from the Top/Bottom and Width Detectors, then selects a new (second) QM for final compression;
  • the DSP inserts all pertinent header information into the Compressed Image Packet in Output Buffer 3-25.
  • the DSP, via software, executes the "grey level stretch and contrast reduction" algorithms by reading the 64 bin histogram of an image and generating a 128-entry remap table to be used by the Remap hardware. Also, the DSP chooses and loads the LSI chip set with the QMs and Huffman tables needed for first and second pass compression. Lastly, the DSP builds the necessary header data (surrounding the compressed image data), and loads this header data into the Output Buffer 3-25.
  • the Histogram is generated in unit 3-7 as the image is being delivered from the N/S hardware and stored in the Input Buffer 3-1.
  • the Histogram could have been generated without any custom hardware by having the DSP unit scan the contents of the Input Buffer after the image has been loaded. This, however, would greatly increase the time required for the JPEG processing/compression path to process the image: not only would the scan itself take relatively long, but it could not begin until AFTER the image was fully stored.
  • As implemented, the Histogram is available to the DSP as soon as the image has been input.
  • the Histogram circuitry samples 50% of the pixels in the image by sampling pixels every 50ns (20 MHz input from N/S) from one of the two serial N/S inputs (Channels 0, 2, 4, and 6) and then from the other (Channels 1, 3, 5, and 7). The net effect is that the pixels sampled are arranged in a checkerboard pattern across the image. As a pixel is sampled, the "upper 6 bits" of the 7-bit pixel (LSB is ignored) are used to address a 64-bin Histogram RAM. The contents of the RAM at the addressed location are extracted, incremented by 1 and reloaded into the RAM. These iterations of reading, incrementing and writing must be done every 50ns.
  • Each of the 64 bins is 16-bits wide and can therefore record 64K occurrences of adjacent grey levels (e.g., 100 0101 and 100 0100 are sorted into the same bin). If, during the "Read-Increment-Write" iterations, any of the bins becomes full (reaches a count of 0xFFFF), that bin will NOT overflow (reset to 0x0000) with any additional pixels, because the bin will peg at 0xFFFF. This is essential for the image processing algorithms to get a good approximation of the grey level content and distribution in the image.
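The bin addressing (upper 6 bits of the 7-bit pixel) and the 0xFFFF peg behave as in this sketch (bins are numbered zero-based here):

```python
def build_histogram(pixels):
    """Sort 7-bit pixels into 64 bins by the upper 6 bits (the LSB is
    ignored, so adjacent grey levels share a bin); each 16-bit bin
    pegs at 0xFFFF instead of rolling over to 0x0000."""
    bins = [0] * 64
    for p in pixels:
        b = (p & 0x7F) >> 1          # upper 6 bits of the 7-bit value
        if bins[b] < 0xFFFF:
            bins[b] += 1             # peg: stop counting at 0xFFFF
    return bins

h = build_histogram([0b1000101, 0b1000100])  # adjacent grey levels
assert h[0b100010] == 2                      # both land in one bin
```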
  • the Input Buffer 3-1 is used to store the contents of the scaled, normalized image so that it can be transposed, grouped into 8x8 pixel blocks and sent to the compression hardware. Not only is it necessary to store the entire image for transposition purposes, but it is essential, to achieve predictable compression results, to perform "two-pass compression" on a fixed input image.
  • the Transpose and Block function (see 3-11) is used to generate the 20-bit address needed to access the 1Meg x 8 Input Buffer. It is this function that first stores the image data into the Input Buffer 3-1 as it is delivered from the N/S, and then removes the data in transposed, 8x8 blocks during each of the two compression passes.
  • the 1Meg x 8 Input Buffer 3-1 is spatially divided. This arrangement is shown in Figs. 5, 6. Since the Input Buffer is byte-wide, each byte will hold one 7-bit pixel (MSB is 0).
  • the 1Meg x 8 buffer is first divided into eight, horizontal, 128K x 8 Channel Buffers representing the 8 channels of image data from the N/S (e.g., see Figs. 5, 6A).
  • the 3 MSBs of the 20 address lines access one of the 8 channels.
  • Each 128K x 8 Channel Buffer is divided into an array of 8x8 pixel blocks (Fig. 6B).
  • each block is divided into an 8x8 array of pixels with the 6 LSBs of the 20 address bits used to select the pixel within a block (Fig. 6C).
  • the Channel Buffers are arranged in an array of blocks composed of 11 horizontal rows by 186 vertical columns. Because the maximum scaling resolution acceptable by the JPEG processing/compression hardware is 137.5 dpi (11/16ths scaling of the 200 dpi captured image), the maximum number of pixels processed, per vertical scan in one channel, is 88. Since there are 8 rows of pixels per block, 11 blocks are needed to cover the vertical dimension (Fig. 6B). One vertical column of 11 blocks consumes 704 (11 x 64) memory locations. Since there are 131072 (128K) locations in each Channel Buffer, up to 186 (131072 / 704 ≈ 186.18) columns of 11 blocks can fit inside the memory.
  • Vertical Block Counter (4 bits): accesses the 11 rows of blocks in each Channel Buffer. Horizontal Block Counter (8 bits): accesses the 186 columns of blocks in each Channel Buffer.
  • Since the Vertical and Horizontal Block Counters are not fully utilized (11 of 16 and 186 of 256 respectively), their combined values are remapped, via a "PROM look up table", to form a Block Pointer value consisting of 11 bits. In total, therefore, 20 bits are needed to access the entire Input Buffer 3-1 (Figs. 3, 6D).
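Packing the counter fields into the 20-bit address can be sketched as follows; the PROM remap of the two block counters is modeled here as an already-computed 11-bit block pointer:

```python
def input_buffer_address(channel, block_ptr, row, column):
    """Compose the 20-bit Input Buffer address: 3 MSBs select one of
    8 channels, 11 bits select the block (PROM-remapped from the
    horizontal/vertical block counters), and the 6 LSBs pick the
    pixel (3-bit row, 3-bit column) within an 8x8 block."""
    assert channel < 8 and block_ptr < 2048 and row < 8 and column < 8
    return (channel << 17) | (block_ptr << 6) | (row << 3) | column

addr = input_buffer_address(7, 2047, 7, 7)
assert addr == (1 << 20) - 1          # top of the 1Meg address space
assert input_buffer_address(0, 0, 0, 0) == 0
```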
  • By controlling the five counters described above (Fig. 6D) and changing their operating parameters (when they are incremented, when they are reset or preset and whether they count up or down), the hardware is able to store a vertically scanned image in the Input Buffer 3-1 and then unload it, in 8x8 blocks, starting from any of the four corners of the image.
  • the N/S hardware delivers vertical scans of pixel data in two serial pixel streams as shown in Fig. 2 (and see further details in Figs. 1'-5', described below).
  • the JPEG processing/compression hardware must store these pixel pairs at two separate locations in its 1Meg x 8 Input Buffer (Fig. 5) simultaneously. For this reason, the least significant bit of the Channel Counter is not used during image capture because pixels are received in pairs (from channels 0/1, 2/3, 4/5 and 6/7 respectively) and must be stored in two separate channels of the Input Buffer simultaneously.
  • As each pixel pair is stored, the Row Counter (Fig. 6D) is incremented. When the Row Counter rolls over (every 8 pixels), the Vertical Block Counter is incremented.
  • When the Vertical Block Counter reaches the scale factor (e.g., when the Vertical Block Counter reaches 11 for 11/16ths scaling), the Vertical Block Counter is reset and the Channel Counter is incremented by 2. When the Channel Counter rolls over from 6 to 0, the Column Counter is incremented. In this fashion one vertical scan is stored in the Input Buffer (3-1).
  • Every eight scans, the Column Counter will roll over and the Horizontal Block Counter is incremented. Once the image is completely input, the Horizontal Block Counter will indicate the width of the scaled image in units of 8 pixel blocks.
  • To understand the control of the counters during compression, use Fig. 5 for reference.
  • an image has been stored in the Input Buffer during image capture.
  • the valid portion of the image lies between Horizontal Block (HBLK) #0 and HBLK #120 (as determined by the width count) and between channel 5, Vertical Block (VBLK) #1 and channel 0, VBLK #9.
  • compression will begin at the top left corner of the image and will proceed left to right, top to bottom. Compression can begin in any of the 4 corners of the captured image, and counter control will be slightly different for each compression orientation.
  • For this example, the Channel Counter is preset to 5, the VBLK counter is preset to 1, the HBLK counter is preset to 120, and the Column and Row Counters are preset to 7 (upper left-hand corner of image).
  • pixels must be raster scanned one pixel at a time, one block (64 pixels in 8X8 array) at a time. If the image is being compressed left to right, top to bottom (as it is in this example), all 64 pixels within the block must be read in the same direction before another block is read. Therefore, to read one 8X8 block, the Column Counter is continuously decremented (from 7 to 0) and every time the Column Counter rolls over (7 to 0) the Row Counter is decremented.
  • After all 64 pixels of a block are read, the HBLK counter is decremented (moving left to right) and 64 pixels are read from the next block.
  • When the HBLK counter rolls over (end of a row of blocks), the VBLK counter is decremented (moving top to bottom), and the HBLK counter is preset to the image width (left side, or 120 in this example).
  • When the VBLK counter underflows (bottom of the channel), the Channel Counter is decremented, the VBLK counter is preset with one less than the scale factor (top of the channel, or 10 in the case of 11/16ths scaling), the HBLK counter is preset to the width (left side, or 120 in this example) and the Column and Row counters are preset to 7 (top right corner of a block).
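Whichever corner compression starts from, the counters always produce a block-at-a-time raster: all 64 pixels of one block, then the next block across, then the next row of blocks. The canonical left-to-right, top-to-bottom case looks like this sketch:

```python
def block_raster_order(width_blocks, height_blocks):
    """Yield (block_row, block_col, pixel_row, pixel_col) in the order
    the counters walk the image: 64 pixels of a block, then blocks
    left to right, then block rows top to bottom."""
    for br in range(height_blocks):
        for bc in range(width_blocks):
            for pr in range(8):
                for pc in range(8):
                    yield br, bc, pr, pc

order = list(block_raster_order(2, 1))
assert len(order) == 2 * 64
assert order[0] == (0, 0, 0, 0)
assert order[63] == (0, 0, 7, 7)   # finish a block before moving on
assert order[64] == (0, 1, 0, 0)   # only then start the next block
```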
  • Top/Bottom Detection (e.g., see 3-3, Figs. 3,7): The top and bottom borders of an image are detected as the image is being delivered from the N/S hardware and stored in the Input Buffer 3-1.
  • the borders could have been found without any custom hardware by having the DSP scan the contents of the Input Buffer after the image has been loaded. This, however, would greatly increase the time required for the JPEG processing/compression path to process the image (e.g., not only would it take relatively long for the DSP to sample the contents of the Input Buffer, but this sampling could not be done till AFTER the image was stored in the Input Buffer) .
  • the borders are available to the DSP as soon as the image has been input— thus saving time.
  • the top and bottom borders detected by the hardware are relative to the scan direction of data received from the N/S hardware.
  • scans are assumed to be "pixel-wide" vertical strips spanning all 8 channels starting at the bottom block of channel 0 and proceeding to the top block of channel 7.
  • the bottom border of the image is the lowest occurrence of image data relative to the scan direction. This may actually be the top of the physical image, depending on which camera is capturing the image (front or back) and how the check was loaded in the track (e.g. upside down?).
  • the resolution of the border detection circuitry is in units of blocks, not pixels. Therefore, borders are reported according to the channel and block row within the channel.
  • Initially, the top border is set at the extreme BOTTOM (CH 0, block 0) and the bottom is set at the extreme top (CH 7, block A).
  • the JPEG compression path hardware compares each pixel to a given "border" threshold value. Pixels "whiter" than the threshold are considered part of the check image, while pixels “darker” than the threshold are considered track background.
  • When five CONSECUTIVE pixels are found to be "whiter" than the threshold, the hardware declares a "transition".
  • the first transition seen in a scan is considered the “bottom” of that scan while the last transition in a scan is considered to be the "top”.
  • the top and bottom borders of the latest scan are sent to a Top and Bottom Port. Only if the latest scan's top border is "higher” than the value presently existing in the Top port will the Top port be updated to the new value. Likewise, only if the latest scan's bottom border is "lower” than the value presently existing in the Bottom port will the Bottom port be updated.
  • the Top port will contain the channel and block number of the "highest” point of the image and the Bottom port will contain the channel and block number of the "lowest” point of the image.
  • the reason the hardware requires five (5) CONSECUTIVE image pixels before declaring a transition is to prevent the top/bottom circuitry from erroneously declaring a border based on dust or scrap (noise) in the track that might exceed the value of the selected threshold for a few consecutive pixels. Selection of five has been determined experimentally.
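The 5-consecutive-pixel noise filter can be sketched as follows:

```python
CONSECUTIVE = 5   # experimentally chosen noise filter length

def find_transitions(scan, threshold):
    """Declare a transition only after 5 CONSECUTIVE pixels whiter
    than the border threshold; dust or scrap in the track rarely
    exceeds the threshold for that many pixels in a row."""
    run, transitions = 0, []
    for i, p in enumerate(scan):
        run = run + 1 if p > threshold else 0
        if run == CONSECUTIVE:
            transitions.append(i - CONSECUTIVE + 1)  # run start index
    return transitions

# A 3-pixel noise burst is ignored; a 5-pixel run is a real edge.
scan = [0, 90, 90, 90, 0, 0, 90, 90, 90, 90, 90, 0]
assert find_transitions(scan, 50) == [6]
```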
  • While this concept of border detection may seem straightforward, it is complicated by the fact that the vertical scans are not delivered to the H/C by a single pixel stream sequentially from the bottom to the top of the scan. Rather, data is delivered simultaneously from two pixel streams representing data from even/odd channels (channels 0, 2, 4 and 6 and channels 1, 3, 5 and 7), respectively.
  • the hardware implements two independent transition detectors TD-1, TD-2 (one for each N/S pixel stream; see Fig. 7; Fig. 7 illustrates top/bottom detection from two serial pixel streams).
  • When transitions are detected, the hardware considers which of the two detectors found the transition, the value of the Channel Counter and the value of the Vertical Block Counter (see the Transpose and Block description) to see where, in the scan, the transition took place. Based on this information, the appropriate Top or Bottom port can be updated.
  • the top and bottom borders are detected by the JPEG processing/compression path hardware, but the right (leading) edge and left (trailing) edge are detected upstream by the N/S hardware.
  • the N/S hardware subsequently "qualifies" only the data between the lead and trail edges when it delivers pixel data to the JPEG processing/compression (P/C) paths.
  • the N/S hardware will only qualify data in groups of 8 vertical scans. In other words, the N/S hardware frames the image in the horizontal dimension on block boundaries. Therefore, the only action the JPEG processing/compression hardware need perform to detect the width of the image (in block units) is to count every eighth qualified scan from the N/S unit. By doing this, the JPEG processing/compression hardware is able to furnish the horizontal dimension of the image to the DSP as soon as the image is input from the N/S.
  • the N/S hardware When the N/S hardware sends image data to the JPEG P/C path, it appends a burst of image-related "status" information on the "back porch" of the image transfer.
  • the JPEG P/C path must take this information, supplement it with "status data" pertinent to the P/C hardware and attach it to the now Compressed Image Packet that is to be sent to the rest of the check imaging system via the JCDB (output of Fig. 3).
  • the JPEG P/C hardware accomplishes this function by buffering the "status data" as it is received from the N/S unit.
  • the normalized and scaled image received from the N/S unit, and stored in Input Buffer 3-15 (Fig. 3), is processed prior to compression.
  • the image processing algorithms are executed by the DSP, based on the image dimensions and histogram generated by the hardware as the image was being input from the N/S.
  • the image could be updated by having the DSP update every pixel in the Input Buffer to its new image processed value.
  • Such a method would take a relatively long time, since the DSP would have to fetch, update and write every pixel in the Input Buffer.
  • a much faster method is preferred: i.e., to remap pixels "on the fly" as they are sequentially sent to the compression hardware.
  • the DSP generates a 128-entry "remap table" (one entry for every pixel grey level).
  • every pixel (7 bits) pulled from the Input Buffer addresses the 128-entry table (implemented in high speed RAM at 3-13), and the output from the table is the remapped, image-processed value (8 bits) for that pixel.
  • This "remapped pixel" (value) is then sent to the compression hardware (i.e., to 3-17, and beyond).
  • LSI JPEG Compression (see 3-17, Figure 3):
  • the JPEG defined Discrete Cosine Transform (DCT), quantization and Huffman coding is implemented in a two-chip set from LSI LOGIC CORPORATION. This chip set requires that image pixels be delivered in 8x8 blocks.
  • the retrieval of image data from Input Buffer 3-1 in the proper (8x8) sequence is effected by the Transpose and Block logic 3-11.
  • the pixels are remapped (image processed) "on the fly” (at unit 3-13) before being sent to the LSI chip set (i.e. to block 3-17).
  • the design of the chip set by LSI LOGIC Corp. is such that the time required to compress an image can be determined by the number of pixels to be compressed, as well as by the speed at which compression is executed. Even in the (statistically rare) case where JPEG compression results in EXPANSION, the LSI JPEG chip set will take no longer to execute. This feature is critical to performance in a high speed check imaging system like the indicated DP1800.
  • the operating clock frequency of the JPEG compression path is preferably increased from 20 MHz to 25 MHz during compression.
  • Packet Count (see 3-21, Figure 3):
  • the JPEG P/C hardware need not save the results of "first pass compression". Therefore, many of the functions executed by Post Processor Unit 3-19 (e.g., byte stuffing, restart code insertion, writing to Output Buffer) are not done during "first pass compression". What is needed from first pass compression is merely an estimate of the result it yields.
  • the (32-bit) words output from the LSI LOGIC chip set 3-17 during compression are counted by the Packet Counter 3-21. The only inaccuracy in this count is that it does not include any "extra data” like stuffed bytes, restart codes or padded bytes that would be added to the compressed image by the Post Processor 3-19 (see Fig. 3).
  • JPEG Post Processor (see 3-19 and 3-17A, Fig. 3):
  • the LSI JPEG chip set outputs 32-bit words of entropy coded data during compression. This data does not include the JPEG-required "stuffed bytes" (a byte of 0x00 must follow every byte of 0xFF entropy coded data), the JPEG-required "restart codes" (two bytes, 0xFF and 0xDy, after each horizontal row of blocks has been compressed), or the "padding bytes" (0xFF) required by the application to align restart codes on 32-bit boundaries.
  • the JPEG compression hardware must take the 32-bit output from the LSI chip set, insert the necessary data described above and funnel this information down into the 16-bit-wide Output Buffer 3-25. To accomplish this, the JPEG Post Processor 3-19 must take one byte at a time from the 32-bit LSI chip set output, check to see if the byte equals 0xFF, send this data to the Output Buffer, and also send another byte of 0x00 if the previous byte was 0xFF. After compressing each horizontal row of blocks, the Post Processor will insert the two-byte restart marker and the number of padding bytes required to align restart markers on 32-bit boundaries. JPEG standards require the restart marker to have a MOD 8 count component; this is provided by the Post Processor hardware.
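A minimal software sketch of this byte handling follows. The function name, the choice of padding before the marker, and the alignment convention (marker ends on a 32-bit boundary) are my assumptions for illustration; the stuffing and marker rules themselves come from the JPEG standard as described above.

```python
# Illustrative model of the Post Processor's byte handling: every 0xFF
# entropy-coded byte is followed by a stuffed 0x00; 0xFF padding bytes align
# the restart marker on a 32-bit boundary; the marker is 0xFF, 0xD0..0xD7
# (the MOD 8 count component required by JPEG).
def post_process_row(entropy_bytes: bytes, row_index: int) -> bytes:
    out = bytearray()
    for b in entropy_bytes:
        out.append(b)
        if b == 0xFF:
            out.append(0x00)              # JPEG-required stuffed byte
    while len(out) % 4 != 2:              # leave room so the 2-byte marker
        out.append(0xFF)                  # ends on a 32-bit boundary
    out += bytes([0xFF, 0xD0 + row_index % 8])   # restart marker, MOD 8
    return bytes(out)

coded = post_process_row(bytes([0x12, 0xFF, 0x34]), row_index=0)
print(coded.hex())
```

Note that the 0xFF padding bytes are not themselves stuffed; they are fill bytes preceding a marker rather than entropy-coded data.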
  • On average, it takes one clock cycle for the Post Processor to process each byte of data from the 32-bit (4-byte) LSI chip set output ... two clock cycles if a 0x00 byte needs to be stuffed. While compressing "busy" portions of the image, it is possible (worst case) for the LSI chip set to output a 32-bit word 3 times for every four clocks. The Post Processor cannot keep pace with the LSI chip set during these "busy" portions of the image. To mitigate such "special circumstances", the JPEG compression hardware preferably also provides a 1K x 32 FIFO buffer 3-17A between the LSI chip set output and the Post Processor Logic (i.e., between 3-17 and 3-19).
  • This buffer 3-17A allows the LSI output data to be buffered, and not lost, while the Post Processor catches up during "un-busy" portions of the image. Since the Post Processor only operates during "second pass compression" (when the DSP has already selected a target QM, and when compression is more predictable), the probability of "busy" portions of image occurring is greatly reduced. Therefore, any FIFO buffer, let alone a 1K FIFO, would rarely be used. In the statistically-rare case where the FIFO (3-17A) is being heavily used, the input to the LSI chip set will be throttled by the JPEG compression hardware when the FIFO (3-17A) reaches "half-full" (512 words waiting for Post processing).
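The half-full throttling policy can be modeled with a toy queue. This is purely illustrative (class and method names are mine); it shows only the rate-matching idea, not the hardware timing:

```python
# Toy model of the 1K x 32 FIFO between the LSI chip-set output and the
# Post Processor: the producer is throttled once the FIFO reaches
# half-full (512 words), so bursty "busy" image regions are absorbed
# without losing data.
from collections import deque

class ThrottledFifo:
    def __init__(self, depth: int = 1024):
        self.depth = depth
        self.words = deque()

    def producer_may_run(self) -> bool:
        # Throttle the LSI chip-set input at half-full.
        return len(self.words) < self.depth // 2

    def push(self, word):
        assert len(self.words) < self.depth, "FIFO overflow"
        self.words.append(word)

    def pop(self):
        return self.words.popleft()

fifo = ThrottledFifo()
for w in range(512):
    fifo.push(w)
print(fifo.producer_may_run())  # at half-full the producer is throttled
```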
  • Restart Marker FIFO (3-23, Figure 3):
  • the JPEG P/C hardware generates and loads the compressed (entropy-coded) data into Output Buffer 3-25.
  • the DSP builds the required header information around the entropy coded data packet.
  • One of the required elements of the header is a table of offsets that will point to the END of each entropy coded segment in the Output Buffer.
  • An entropy coded segment from the JPEG P/C hardware is comprised of the compressed data from one horizontal row of 8x8 blocks, ending with a restart marker. Providing this table allows the compressed image to be "gracefully" truncated by software on entropy coded segment boundaries.
  • the DSP gets this "offset" information from the Restart Marker FIFO 3-23.
  • the Post Processor 3-19 must insert restart markers into the Output Buffer at the end of each entropy coded segment.
  • When the Post Processor loads a restart marker into the Output Buffer, the address into the Output Buffer at that particular instant is loaded into the Restart Marker FIFO.
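The value of the offset table described above — "graceful" truncation on entropy-coded segment boundaries — can be sketched in a few lines. The function and names are mine; the offsets stand in for the values the DSP reads from the Restart Marker FIFO:

```python
# Hedged sketch: each offset marks the END of one entropy-coded segment
# (one horizontal row of 8x8 blocks plus its restart marker). Software can
# truncate an over-large compressed image at the last whole segment that
# fits a size budget, keeping the remaining data decodable.
def truncate_at_segment(data: bytes, segment_end_offsets: list[int],
                        budget: int) -> bytes:
    best = 0
    for end in segment_end_offsets:
        if end <= budget:
            best = end            # keep whole segments only
        else:
            break
    return data[:best]

data = bytes(range(30))
offsets = [10, 20, 30]            # three 10-byte entropy-coded segments
print(len(truncate_at_segment(data, offsets, budget=25)))
```

With a 25-byte budget, only the first two whole segments (20 bytes) are kept; a partial third segment would not be decodable, so it is dropped.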
  • the 64K x 16 Output Buffer 3-25 is the physical location where the Compressed Image Packet is built by the JPEG compression hardware (entropy coded data) and by the DSP (header information). During second pass compression the entropy coded data is loaded into the Output Buffer. After second pass compression is complete, the DSP builds the remaining header information around the entropy coded data. Once this is complete, the Image Packet is ready for transfer to the next stage (see JCDB, or "JPEG compressed Data Buffer, Fig. 3) of the high speed check imaging system; and then the JPEG compression path is free to accept another image into its Input Buffer 3-1.
  • JPEG compression hardware: entropy coded data
  • DSP: header information
  • This "dual buffering" (Input Buffer and Output Buffer) so implemented in my JPEG processing/compression path according to this feature, enhances performance by allowing a second image to be received by a JPEG compression path before the first Compressed Image Packet is sent to the JCDB.
  • Scaler Invention (Figs. 1'-5'): Unlike prior arrangements for scaling image data, the subject "Mapping Scaler" is not microprocessor-based and performs a "mapping" (rather than simply implementing a scaling algorithm, like the ASIC of US 5,305,398) — so it can do "asymmetric scaling", and it can store multiple algorithms while selecting one as needed (whereas, by contrast, US 5,305,398 can run only one set scaling algorithm).
  • this "Mapping Sealer” provides more flexibility in storing, selecting, and running new and different scaling algorithms; new scaling algorithms, not currently stored, can be easily loaded by simply changing the circuit ROMs with no hardware changes required; this sealer can implement asymmetric scaling; and it is not microprocessor based.
  • Image channel data will be understood as a collection of scan lines. Each scan line (e.g., see Fig. 4') makes up 1/8 of the total track image (top to bottom) . If you place scanlines next to one another you will get a strip of image that is 1/8 of a track high and that extends lengthwise (e.g., to infinity).
  • Each scan line is 128 pixels tall. Image data is fed into the scalers sequentially, starting with pixel 0 of the first scan line and ending with pixel 127. After pixel 127, the sequence begins again with pixel 0 of the next scan line, and so on.
  • the input data to the scalers therefore has a pattern like the following:
    |<------ Scan line 1 ------>|<------ Scan line 2 ------>|
    p0,p1,p2,p3, ... p123,p124,p125,p126,p127, p0,p1,p2,p3, ... p127
  • Resolution for such input data may be assumed as 200 pixels/inch of document. As it turns out, this resolution is too high, so one needs a means of reducing resolution without degrading image quality.
  • the subject mapping sealer circuit (Fig. 1') performs a two-dimensional mapping of adjacent pixel values (document height) and adjacent scanline values (document length) to "new values” and adds valid/invalid markers. This "mapping" can reduce the image resolution from 200 dpi (dots per inch) to some lesser value.
  • the scaler circuits preferably accomplish the mapping using "ROM Look-Up tables" (Scaler ROMs so coupled), arranged preferably in a pipeline architecture.
  • the ROM-based design means that all scaling calculations are performed ahead of time by the algorithm designer. Then the algorithm results are programmed into the scaler circuit ROMs (see R-1, R-2, R-3 in Fig. 2'; R-4, -4', -5 in Fig. 3'). The scaler circuits do not execute algorithmic calculations. The "new values" and valid/invalid markers come from the scaling algorithm results that are stored in the scaler ROMs. The scaler circuits "map" input values into the output algorithm results.
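The precomputed-ROM idea can be illustrated with a small model. Everything here is my own stand-in: a Python dictionary plays the role of the ROM, and 2-pixel averaging with 8/16 decimation stands in for whatever algorithm the designer actually programs; the real circuit performs only the table lookups.

```python
# Illustrative model of the ROM look-up design: the algorithm designer
# precomputes, for every (adjacent-pixel sum, pixel position) pair, an
# output value plus a valid/invalid marker; the circuit merely indexes the
# table. This tiny "ROM" realizes 8/16 scaling by 2-pixel averaging.
def build_rom():
    rom = {}
    for value_sum in range(256):        # sum of two adjacent 7-bit pixels
        for position in range(16):      # 4-bit pixel counter window
            # keep every other pair -> 8 of 16 positions survive (8/16)
            valid = position % 2 == 1
            rom[(value_sum, position)] = (value_sum // 2, valid)
    return rom

def map_pixels(pixels, rom):
    out = []
    prev = 0
    for position, p in enumerate(pixels):
        value, valid = rom[(prev + p, position % 16)]
        if valid:
            out.append(value)           # invalid results are filtered out
        prev = p
    return out

rom = build_rom()
print(map_pixels([10, 20, 30, 40], rom))  # -> [15, 35]
```

Swapping in a different scaling algorithm means rebuilding the table, not changing the lookup logic — which is exactly why new algorithms can be loaded by simply changing the ROMs.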
  • any adjacent pixel/adjacent scanline scaling algorithm that can be designed can be implemented in this design.
  • the scaling design according to this implemented feature can store eight (8) different scaling algorithms. Each algorithm is easily selected from the NS board's Command Register CR via a "scale factor" input (e.g., see Control Register CR in Fig. 5'). Of course, using larger ROMs would allow for more stored algorithms.
  • pipelined architecture means that, once the pipeline is full, the entire two-dimensional scaling function occurs in one clock cycle (tic). Therefore, scaling can occur in "real time" (at a desired 8 MHz rate). Actual Scaling (Mapping):
  • the scaler circuits implement the scaling mapping in a pipeline fashion.
  • the first Scaler stage S-I (Figs. 1', 2') implements "adjacent pixel mapping" (document height).
  • a "current" (7-bit) pixel value goes (from INPUT to R-1) to the pixel ROM address.
  • the "previous” pixel value goes to the previous pixel ROM address R-2.
  • each ROM address gets a pixel count number p-c and associated scale factor.
  • the pixel count number p-c keeps track of which pixel one is working on. Pixel count is reset at the end of every scan by an "end of scan" (e-os) signal.
  • a "scale factor" input selects which of the eight 8 possible scaling algorithm results is active.
  • the ROM address contains the pixel value, pixel number, and scaling algorithm index. These three input values point (map) to a unique output data value and a valid "marker", depending upon the scaling algorithm (used to separate the result file).
  • the data output from each ROM is then fed to the address of an Adder ROM (R-3) that performs the addition-mapping. (Or whatever other mapping that may be programmed into it. )
  • stage one S-I (Figs. 1', 2'):
  • p127 and p0 are mapped to → p0' and declared "valid" or "invalid".
  • p0 and p1 are mapped to → p1' and declared "valid" or "invalid".
  • p126 and p127 are mapped to → p127' and declared valid or invalid.
  • the output data from stage one may now look like the following (possible 100 dpi scaling; notice the new values and valid/invalid markers "v,i"):
  • the "current" scan line pixel value is stored in a "Scan Line ROM” R-4, along with scale factor and scan line count.
  • the previous scan line pixel value is stored in a "Previous Scan Line ROM"
  • p0' Scan1 and p0' Scan2 are mapped to p0'' and marked valid or not.
  • p1' Scan1 and p1' Scan2 are mapped to p1'' and marked valid or not.
  • p127' Scan1 and p127' Scan2 are mapped to p127'' and marked valid or not.
  • the output of the scaling circuits (S-OUT, Fig. 1') is a two-dimensional, scaled-down version of the input (S-IN).
  • the scaled values and "valid" labels are dependent upon the scaling algorithm results stored in the sealer-ROMs. Pixels marked with "i” are invalid and are filtered-out of the data stream.
  • the described Sealer circuits preferably use a 4- bit pixel and 4-bit scanline counter (e.g., see counter PC in Figs. 1' , 2' ) .
  • Counter PC keeps track of pixel and scanline location within a 16-position window. Therefore, the sealers can implement any scaling algorithm that scales down the image by a multiple of 16. Possible scaling selections are 1/16, 2/16, 3/16, 4/16, 5/16, 6/16, 7/16, 8/16, 9/16, 10/16, 11/16, 12/16, 13/16, 14/16, 15/16, 16/16.
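Any k/16 scale factor amounts to choosing which k of the 16 window positions are marked "valid". One plausible way to spread them evenly (my own helper, for illustration only) is:

```python
# Sketch: with a 4-bit counter the scaler works in 16-position windows, so
# a scale factor k/16 can be realized by marking exactly k of the 16
# positions "valid" in each window. This spreads the k slots evenly.
def valid_positions(k: int) -> set[int]:
    """Positions (0..15) kept for a k/16 scale factor."""
    return {(i * 16) // k for i in range(k)}

for k in (4, 8, 16):
    print(k, sorted(valid_positions(k)))
```

For example, 8/16 keeps every other position, and 16/16 keeps them all; which positions are actually kept in the real circuit is up to the algorithm programmed into the ROMs.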
  • the Scaler circuits preferably use a 3-bit "scale factor selection" code.
  • This code allows 8 different scaling algorithms to be stored in the scaler ROMs simultaneously. Which scaling algorithm is run is determined by the 3-bit scale factor selection code. This code is accessed via the command register of the NS board.
  • This selection code can activate different scaling factors, such as 4/16 or 8/16; or it can activate different scaling algorithms, such as 8/16 2x2 averaging or 8/16 bilinear interpolation, or whatever algorithm the designer wants to run; it is up to the algorithm designer and what he has programmed into the "Scaler ROMs" (e.g., see R-1, -2, -3, -4, -4', -5 in Figs. 1', 2').
  • Since adjacent pixel scaling (height) is handled independently of adjacent scanline scaling (length), it is possible to implement "asymmetric scaling". There is no reason why adjacent pixel scaling has to use the same scaling factor or scaling algorithm as adjacent scanline scaling.
  • For example, adjacent pixel scaling (document height) might be 7/16 and adjacent scanline scaling (document length) might be 9/16. This particular example would have the visual effect of stretching the document length-wise. One can see how this option gives the algorithm designer extra flexibility in deciding how to scale documents.
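The 7/16 x 9/16 example above can be checked with simple arithmetic (the function below is mine, for illustration):

```python
# Hypothetical illustration of asymmetric scaling: height (adjacent pixels)
# and length (adjacent scanlines) use independent k/16 scale factors, e.g.
# 7/16 vertically and 9/16 horizontally, which stretches the document
# length-wise relative to its height.
def scaled_dimensions(height_px, length_px, height_k=7, length_k=9):
    # k/16 decimation applied independently on each axis
    return (height_px * height_k) // 16, (length_px * length_k) // 16

print(scaled_dimensions(128, 1600))  # -> (56, 900)
```

A 128-pixel-tall, 1600-scanline document comes out 56 x 900: the length is reduced less than the height, producing the stretching effect described.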
  • the S-II ROMs (scanline ROM R-4, "previous scanline" ROM R-4' and "adder" ROM R-5) can also be replaced with one large ROM.
  • the scalers allow for storage of eight separate scaling algorithm-mappings.
  • the design could use larger ROMs and so accommodate more scaling algorithm-mappings.
  • the mapping scaler has several advantages (e.g., over the ASIC Scaler of US 5,305,398, with its microprocessor-based single fixed scaling algorithm): the "Mapping Scaler" provides more flexibility in storing, selecting, and running new and different scaling algorithms; scaling algorithms not currently stored can be easily loaded by simply changing the circuit ROMs, with no hardware change required; and it can implement asymmetric scaling. Since these scaler circuits perform a mapping function rather than just implementing a scaling algorithm, any scaling algorithm that can be defined as a two-dimensional "adjacent pixel/adjacent scanline mapping" can be run. [There is no reason why the scaling algorithm or scaling scale needs to be identical for adjacent pixel vs. adjacent scanline scaling.]
  • multiple algorithms are stored (the instant NS scalers store 8) and can be selected for use on an as-needed basis by a single instruction to the NS command register (vs. US 5,305,398, which could run only one scaling algorithm).
  • multiple scaling mappings can be stored in the mapping ROMs, with these mappings selected by changing the ROM addressing via the scale factor input.
  • This current NS implementation stores 8 different scaling mappings.
  • the number of mappings stored is, of course, dependent on the size of the ROMs chosen. Larger ROMs would give space for more mappings.
  • the scaler circuits can make use of dual port RAM for storage of a complete scanline for use in the adjacent scanline scaling. And since the scaler marks pixels as "valid" or "invalid", pixels marked invalid may be filtered out of the data stream.
  • A Document Processor DP is schematically indicated in Fig. 1B, including an image interface unit (board) IIB and associated processing software DP-S, fed by an Imaging Module IM, including a pair of Front/Back image processing units A-1, A-5 to develop respective electronic, digital document image data, as aided by a Diagnostic/Transport Interface DTI.
  • This electronic image data is passed to a Main Processor A-7, and may be stored in a Storage-Retrieval unit SRM, being linked to processor A-7 via a Point-to-Point Optical Link unit A-9, as known in the art.
  • The Electronic Camera and Image Digitizer operate to process video scan lines, but they perform no operations on a per-document basis, and therefore do not use "sync tags".
  • the "sync tags” are preferably arranged to originate with the Document Processor Software (e.g., see Fig. IB, element DP-S) as workers will appreciate.
  • Figure 2A illustrates a "flow" of "sync tag” information through the image processing electronics of a document processor (e.g., like that of Fig. IB, Imaging Module thereof) to a Storage and Retrieval Module, SRM.
  • the "sync tags” preferably originate within the document processor software, and are returned to the document processor software, providing an end-to end check of the integrity of image generation.
  • the "sync tag" for a document is preferably assigned by software executing in the document processor. That software produces information indicating the operations that the document processor/imaging module are to perform on the document as it travels to its assigned sort-pocket; this is the "Dispose Command". Part of this Dispose Command is the sync tag and image information.
  • the Dispose Command is transferred by the Image Interface IIB (Fig. 1B) to the Imaging Module IM.
  • Salient units of image processing electronics are indicated in Fig. 2A, including image digitizer ID, front CAR Port C-F (accepts Courtesy-Amount-Reader data, as known in the art), with a Buffer JCDB (JPEG Data Buffer, also see Figs. 1C and 2) fed by a Histogram/Compressor Stage H/C that is, in turn, fed by a pair of Normalizer/Scalers (Master N/S, M-S and Slave N/S, S-S).
  • Sync-tag data is fed to Scaler M-S (e.g., from Document Processor Software DP-S, Fig. 1B).
  • a preferred Normalizer/Scaler organization is indicated in Fig. 9 as a Sync-Tag FIFO register 5-1 coupled between an interface 5-5 to the H/C stage and an input (DTI) interface 5-3, with a "Last Sync- Tag register” 5-7 in parallel therewith. Fault registers 5-8 and Status registers 5-9 are also so coupled.
  • JPEG Compressed Data Buffer JCDB (e.g., see Figs. 1C, 2A, 3 and 9) is indicated in Fig. 10 as a pair of Primary/Redundant Memory buffers 6-3, 6-2 coupled between H/C interface 5-5 (see Fig. 9) and Interface 6-4 to the Main Processor, with a Cross-Compare Stage 6-8 in parallel to Interface 6-4.
  • a Sync-Tag Queue Unit 6-6 and associated DT (Diagnostic Transport) Interface 6-7 (see Fig. 1B) are also fed by H/C Interface 5-5.
  • When the Diagnostic and Transport Interface receives such "disposition information" (e.g., Fig. 1B, as above) from the document processor, it extracts the sync tag information and passes the sync tag value to Sync Tag FIFOs in the Normalizer/Scaler (N/S) units (e.g., see Fig. 9) for the front and back image processing electronics. Then, this disposition information is passed to the Main Processor (A-7, Fig. 1B).
  • DTI Diagnostic and Transport Interface
  • When the DTI receives an "interrupt" from the Normalizer/Scaler units, it begins a timeout for the item to complete compression. The DTI then reads the "Last Sync Tag Register" from both Normalizer/Scalers (e.g., see Fig. 9), and verifies that the sync tag that was read matches the sync tag in the "Dispose Command". If either sync tag is "incorrect" (i.e., does not "match"), then the DTI requests the Main Processor to "Stop Flow". When the DTI receives an interrupt from a JPEG Compressed Data Buffer (JCDB, Fig. 2A: described above), it reads the Sync Tag Queue for the interrupting JCDB.
  • JCDB JPEG Compressed Data Buffer
  • If this sync tag does not match, the DTI requests the Main Processor to "Stop Flow". If this is the second JCDB interrupt for this item (that is, if the interrupt from the JCDB for the other side of this item has been processed or has timed out), then the "status" for this item is sent to the Main Processor A-7.
  • the Main Processor (A-7) compares the master N/S sync tag and the slave N/S sync tag in the JCDB memory buffer for the interrupting JCDB. If the sync tags do not match, then Processor A-7 uses the sync tags from the redundant JCDB memory buffer (see 6-2, Fig. 10) to determine if the fault lies in the JCDB memory buffer, or in the input data from the Histogram/Compressor (H/C bus, 6-5, Figs. 9,10 ). The "status" from the H/C within the JCDB memory buffer indicates if the H/C detected a mismatch in the sync tags as they were received from the N/S boards.
  • Main Processor A-7 compares the master N/S sync tags from the front and back JCDB buffers with the sync tag in the next queued "disposition information" and the sync tag in the "status" bits from the DTI, to verify that the sync tags from all four sources match.
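The Main Processor's four-way check above amounts to a single equality test across sources. The sketch below is my own gloss (function and argument names are not the patent's); it shows only the matching logic, not the fault-isolation that follows a mismatch:

```python
# Hedged sketch of the Main Processor's four-source check: the master N/S
# sync tags from the front and back JCDB buffers, the tag in the next queued
# "disposition information", and the tag in the DTI "status" bits must all
# match; otherwise document flow is stopped.
def sync_tags_consistent(front_jcdb_tag, back_jcdb_tag,
                         disposition_tag, dti_status_tag) -> bool:
    return (front_jcdb_tag == back_jcdb_tag
            == disposition_tag == dti_status_tag)

print(sync_tags_consistent(0x1234, 0x1234, 0x1234, 0x1234))  # True
print(sync_tags_consistent(0x1234, 0x1235, 0x1234, 0x1234))  # False
```

A single stale or skipped image on either side shows up immediately as a tag mismatch at this merge point, which is the core of the sync-tag idea.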
  • the main processor also transmits the "sync-tag" to the Document Processor Software (DPS, Fig. IB) when processing is complete.
  • FIG. 2A illustrates sync tag "flow" through the image processing electronics for handling the image of one side (assume the front side) of a document. Note: the front and back sides of the document are processed by like, separate sets of electronics. The "intermediate" and "final" sync tags produced are examined by the programs executing in the Imaging Module, to verify that the sync tags remain in sequence for a particular side, and that they match between the two sides.
  • the Normalizer/Scaler then stores the sync tag value in the "Last Sync Tag Register" (e.g., Fig. 9), then assembles and transfers the sync tag and status data for the document to the Histogram/Compressor, interrupts the DTI, and removes the entry at the head of the Sync Tag FIFO.
  • the transfer of "status" data (conventionally developed as workers realize) from the Normalizer/Scaler to the Histogram/Compressor array H/C follows the final scan line of an image, using the same bus as the image data (e.g., illustrated exemplarily in Fig. 8).
  • a "document present" signal PDOCPRES_N
  • PVALID_N "valid video” signal
  • the least significant byte of the sync tag from each Normalizer/Scaler board is transferred over a "processed video” (PVIDEO) bus (e.g., see Fig. 2A) .
  • the most significant byte of the sync tag from each Normalizer/Scaler board is transferred over the PVIDEO bus.
  • one byte value may be transferred over the PVIDEO bus to the Histogram/Compressors by each Normalizer/Scaler. Multiple-byte information is transferred, least significant byte first.
  • the Histogram/Compressor includes the sync tags and "status" bits received from the Normalizer/Scaler boards in its "compressed image buffer" (see Output Buffer, Fig. 3)—these bits are transferred to the JPEG Compressed Data Buffer. (e.g., see Fig. 2A) .
  • the Histogram/Compressor compares the sync tag bits received from the Master Normalizer/Scaler board with those received from the Slave Normalizer/Scaler board, and a "fault" is declared if they are "unequal” .
  • This fault data is also included in the image data that is transferred from the compressed image buffer to the JPEG Compressed Data Buffer.
  • the JPEG Compressed Data Buffer (Fig. 2A) extracts the sync tags from the Master N/S as data is received from one of the Histogram/Compressors. This sync tag is placed in a queue which can be read by the DTI. (When an entry is "read", it is removed from the queue.) An "interrupt" is presented to the DTI whenever this queue is not "empty".
  • sync tags can be quite advantageous; e.g., when made available to the DTI at the Normalizer/Scaler and JCDB boards, they can be used to verify the integrity of the DTI's image processing electronics and its internally maintained queues, since each entry in a queue also contains the sync tag. Recall that the sync tags facilitate rapid, reliable detection of a malfunction which throws the Front/Rear image-bits out of sync.
  • sync tags may be used during debugging to easily identify the various pieces of image data and other, collateral, data associated with a particular document image. Having the sync tag embedded into image data (e.g., like "status”) allows this fundamental information to be easily correlated, whereas the "old" way gives no such convenient identifier/synchronizer.


Abstract

An electronic document-imaging arrangement which generates imaging-bits representing a given document and transfers these bits on a 'per-document basis' to various successive electronic processing stages and, finally, to a data base storage means (SRM); this arrangement also including a tag stage to create tag bits unique for each such imaged document and transfer these tag bits with the imaging bits for each document to each such processing stage that handles the imaging bits, and finally to an SRM interface for final matching and removal of the tag bits.

Description

Title AUTOMATIC CHECK HANDLING, USING SYNC TAGS
FIELD OF THE INVENTION
This invention relates to the automatic handling of checks and like documents and, more particularly, to a method and apparatus for modified reproduction of such documents while using "sync tags".
BACKGROUND OF THE INVENTION
Workers familiar with automatically handling and processing financial documents (e.g., sorting checks and
deriving information therefrom) will, at times, want to lift data (e.g., image portions) therefrom. Such data can be automatically, electronically processed and rearranged. So doing is a general object of this case.
In such processing, they, at times, create an electronic digital image of each document as it passes an imaging station (e.g., as simplistically illustrated in Fig. 1D). In Fig. 1D there is shown a financial document sorting system having a typical document sorter 12, which in the preferred embodiment of this invention comprises a model DP1800 sorter manufactured by the UNISYS Corporation of Blue Bell, Pennsylvania.
Sorter 12 contains a track 14 along which a plurality of financial documents 16 (e.g., checks) passes. Sorter 12 includes a magnetic character reader 18 and magnetic strip character controller 20, as well as a document holder 22 and a pipelined image processor (imaging station) 24.
Controller 20 is coupled to reader 18 via signals on a bus 26, to a host computer 28 by signals on a bus 30, and to the pipelined image processor 24 by signals on a bus 32. Computer 28 is coupled to an image storage module 34 by signals on a bus 36, while image storage module 34 is also coupled to the pipelined image processor 24 and to a plurality of workstations 38 via signals on buses 40 and 42, respectively.
In operation, documents 16 sequentially pass reader 18 which reads a typical code appearing upon the usual MICR codeline strip which is normally placed upon each of the documents 16. The code read-out is then sent to computer 28 by signals on bus 30 for storage therein, and also to processor 24 by signals on bus 32. As each document 16 further proceeds, it passes imaging station 24 which creates a digital electronic image of the document, and sends this processed image data, via signals on bus 40, to image storage module 34 for storage therein. After passing station 24, each document is then sorted, by sorter 12, in the usual way (based on the contents of the MICR codeline) and is held at document holder 22.
After a typical prescribed block of such documents 16 has been sorted as aforedescribed, workstations 38, via signals on bus 42, may sequentially request document image data from storage module 34. This image data is then downloaded to a workstation 38, via signals on bus 42, along with associated magnetic code data obtained from host computer 28.
After such image data is so captured at a workstation 38, an operator may electronically enter the dollar amount (e.g., courtesy amount) on each document and electronically resolve any associated inconsistencies. Each image's dollar amount and associated corrections then form a single record which is sent to computer 28, via signals on bus 42, where it may later be accessed for use in automatically inscribing the dollar amount and corrections upon the document. Therefore, the aforementioned document sort system 10 substantially eliminates manual handling of an individual document 16, once its associated dollar amount is so verified and inscribed, to thereby increase the efficiency, speed and timeliness of the overall document sorting system 10.
Within Image Processor 24 in Fig. ID is placed one of "n" JPEG Processing/Compression stages (24-A). Two of these JPEG Processing/Compression paths are implemented on a Histogram/Compressor printed circuit board assembly
(PCBA) shown in Fig. 1C. Compression, etc.:
In one type of document processor, front and rear images of a document are captured, enhanced, and compressed by two independent mechanisms. Following compression, the front and rear images are combined with additional information specific to the document (previously received from the document processor), and stored in a database, separate from the document processor, for subsequent retrieval.
A hardware/software failure can cause the electronic images in the (front and rear image) processing stages to become unsynchronized; e.g., typically because one or more images are skipped on one side. Or, the front and rear image bits may be synchronized with one another, but not synchronized with ancillary document information ("collateral document data" that is associated with the image bits). If this condition goes undetected, then the front and/or rear image bits will not be stored with the proper document record in the database.
Thus, it will be understood as useful to have a method of identifying (both sets of) image bits, especially where the image data is to be reliably co-identified with collateral document data. Such "identification-bits" should be carried with the image data through the various stages of processing, so the image's identity can be maintained at each processing station, and can be transferred to a downstream processing station. Such identification-bits should be available at the point where the front and rear image data and "collateral document data" are merged, to ensure that a full, correct data set is being combined for transfer to the database. Thus, a salient object hereof is to so identify image data, especially with "sync bits" as detailed below.
Artisans will sometimes pre-process such data (bits) and then compress it. Another object hereof is to preprocess such data and then compress it. A related object is to do so for imaging data which is presented in a multi-bit data stream, compressing and reducing the bits and sending the results to utilization means, preferably in an arrangement which comprises a preprocessing (buffer) stage for digitizing and scaling, then presenting the data stream in two parallel bit-streams, plus a compression stage providing two parallel, like compression paths for the in/out streams, via a prescribed first compression-processing and then via a prescribed second compression-processing, to provide a prescribed time-compressed output to the utilization means. A related general object is to execute such compression in "real time" and according to JPEG standards. Another general object is to yield better compression of a data stream, along with normalizing and scaling it, while using "sync tag" identifiers.
BRIEF DESCRIPTION OF THE DRAWINGS For a more complete understanding of the present invention, and advantages thereof, reference may be made to the following detailed description taken in conjunction with the accompanying drawings, in which:
Fig. 1D is a generalized block diagram of a typical document processing (sorting) system, while Fig. 1B is a block diagram indicating the transfer of "sync-tag" data through such a document processing system; while Fig. 1C is a like showing of a dual-path Histogram/Compressor unit thereof;
Fig. 2A is a block diagram of image processing portions of such a system indicating exemplary use (flow) of "sync tag" data according to this invention, while Fig. 2 illustrates the functions of Digitizing, Normalize/Scaling (N/S) and Processing/Compression of an exemplary document; Fig. 3 is a block diagram illustrating a single Processing/Compression path in such a system;
Fig. 1A is a very general block diagram of part of a document sorter system, while Fig. 1 is a block diagram of a document imaging module embodiment for such a system;
Fig. 4 is a diagram of eight related processing paths;
Fig. 5 is an item-related diagram of a subdivided Input Buffer for the array in Fig. 3, while Figs. 6 (6A-6D) indicate how the item in Fig. 5 is preferably addressed and processed;
Fig. 7 similarly illustrates how the top/bottom of the item are detected for such processing in two four- channel streams;
Figs. 1'-5' depict a related scaler embodiment; and Figs. 8-10 relate to the "sync tag", i.e.:
Fig. 8 is a plot of related exemplary processing signals and video/status data;
Fig. 9 details the makeup of a preferred Normalizer/Scaler Unit; and
Fig. 10 gives the makeup of a preferred JPEG Compressed Data Buffer.
DETAILED DESCRIPTION System Overview:
Figure 1A will be understood to depict part of a
Financial Item Sorting System 10 having a typical Item Sorter (e.g., in the preferred embodiment of this invention, a Model DP 1800 sorter which is manufactured by the Unisys Corporation of Blue Bell, PA).
Sorter 12 contains a Track for transport of checks or like Financial Items to be processed, e.g., by a typical reader and related controller. Additionally, the Sorter contains a Camera Assembly, and an Image Processor 24 apt for use in accordance with the teachings of the preferred embodiment of this invention.
Controller 20 is coupled to the Reader and to a Host Computer 28, as well as to Image Processor 24, as known in the art. An Image Storage Module 34 is also coupled to the Image Processor 24 and to a plurality of Workstations ws. The Camera Assembly is coupled to the Image Processor 24.
In operation, checks or like Items sequentially pass the Reader which can read a typical magnetic code thereon. The code read-out is then sent to Computer 28, for storage therein and to Image Processor 24. As each Item travels along the Track, it passes the Camera System which captures an image thereof and outputs a digital representation of the image to the Image Processor 24. This digital representation comprises a plurality of image pixels having an intensity which can be represented by an "intensity-number" between 0 and 255. Image Processor 24 then processes this digitized image and sends the associated signals to the Image Storage Module 34 for storage therein. After this, the documents are then sorted and stored in the usual way.
After a typical block of checks has been processed in this manner, an operator at one of the Workstations may request the aforementioned image bits from Storage Module 34. These bits are then downloaded to his Workstation along with their associated identifying magnetic code data obtained from Host Computer 28.
After these bit sets of (images) are captured by a Workstation, an operator may electronically enter associated data (e.g., check dollar-amount) with a given document-image and electronically resolve any difficulties; (e.g., an error in reading the magnetic code thereon) entering and storing the needed correction for the image. Each digitized image and its associated data and corrections then form a single corrected, completed electronic computerized record which can then be sent to Storage Module 34. Thereafter, it may be accessed for use in automatically inscribing associated data (e.g., dollar amount, corrections) upon the stored Items. In this way, item Sorting System 10 substantially eliminates the manual handling of Items 16 when an associated dollar amount is placed thereon, thereby increasing the efficiency and timeliness of sorting and processing.
Compression Stages:
Within Image Processor 24 is placed one of "n" JPEG processing/compression Stages according to a feature hereof. Two of these JPEG processing/compression paths are preferably implemented on a Histogram/Compression printed circuit board assembly (PCBA) as shown in Fig. 1C.
Image Processor 24 of Fig. 1A is preferably characterized by the following: an Image Digitizer Unit (D of Fig. 2) for analog-to-digital conversion of the captured image, a Normalizer/Scaler (N/S Unit, Fig. 2) for normalization and scaling of the video image, a set of "n" parallel JPEG Processing/Compression Units (J1 etc. of Fig. 2 and 24-A of Fig. 1A) for image processing/JPEG compression, and a JPEG Compressed Data Buffer Unit (JCDB in Fig. 2) for collection and temporary storage of compressed images from the JPEG Processing/Compression Units. [Note: "JPEG" refers to the compression standard by the "Joint Photographic Experts Group".]
These functions are implemented especially to meet the performance requirements of a high-speed check processing (imaging) system and to minimize the cost of the system by reducing the amount of "parallel hardware" needed to compress images. A preferred "dual" Processing/Compression Stage (for JPEG) is indicated in Fig. 1C.
The JPEG compression hardware performs image processing on a 128 grey level, scaled image before executing a two-pass JPEG compression. Scaling is nominally at 137.5 dpi but can range from 137.5 dpi to 50 dpi in steps of 12.5 dpi. This two-pass compression is designed—according to this feature—to reduce images to a predictable "packet size" apt for use in the entire High- Speed Check Imaging System. These functions of the JPEG "P/C" (Processing/Compression) hardware, (detailed below) must be performed, here, in real time on check images as they move down a high-speed Check Sorter Track at an approximate rate of 1800 checks per minute.
It is not possible, within the environment of the high-speed Check Imaging System (detailed below) , for a single JPEG "P/C" (Processing/Compression) path to process every check in real time. Therefore, multiple JPEG "P/C" paths, operating in parallel, are needed (e.g., see Fig. 1-C). To reduce the time required for each processing/compression path to operate on an image (and therefore reduce the number of parallel paths needed to maintain system performance), many of the required functions of the JPEG "P/C" path have been implemented in hardware. The detailed explanation of each of these functions is described below.
System Environment:
A JPEG "P/C" (Process/Compression) path as here contemplated, will perform image processing and real time
JPEG compression of normalized and scaled images of documents (e.g. checks) captured in a Check Sorter at an average rate of 1800 checks per minute. The diagram in Figure 2 indicates conditions under which each JPEG "P/C" path preferably operates and the performance required of such a unit to maintain overall system performance.
Figure 2 shows the processing of a sample of check images as they move left to right across the page, simulating the way a check would move through a Check Sorter. (Here, the track speed of the sorter is assumed to be 300 inches per second; this means that a check 6 inches long will take 20ms to pass a fixed point on the Sorter Track.) Here, checks can range in length from 5.75 inches to 9 inches (19ms to 30ms), with inter-check gaps ranging from 1.5 inches (5ms) to several inches.
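The timing figures quoted above follow directly from the track speed; a small sketch (in Python, with an illustrative helper name) reproduces the arithmetic:

```python
# Track-timing arithmetic from the text: at 300 inches/second, transit
# time (ms) for a length L (inches) is L * 1000 / 300.
TRACK_SPEED_IPS = 300.0  # sorter track speed, inches per second

def transit_ms(length_inches):
    """Time (ms) for a document of this length to pass a fixed point."""
    return length_inches * 1000.0 / TRACK_SPEED_IPS

print(transit_ms(6.0))   # 6-inch check: 20 ms
print(transit_ms(5.75))  # shortest check: ~19 ms
print(transit_ms(9.0))   # longest check: 30 ms
print(transit_ms(1.5))   # minimum inter-check gap: 5 ms
```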
The check images are captured by camera means, here preferably comprised of a vertical, 1024-element CCD array which samples 256 grey levels per pixel (8 bits) with a resolution of 200 pixels per inch. In the vertical direction, the Camera can capture images up to 5.12 inches high. The 1024-element array takes a snapshot of the check every 16.66us as it moves down the Sorter Track, yielding a horizontal capture resolution of 200 pixels per inch. These 1024-pixel scans (captured every 16.66us by the CCD array) are divided into eight 128-pixel Channels (shown as CH 0 through CH 7 in Fig. 2, each composed of 128 pixels). Hardware in the Digitizer D converts each 1024-pixel scan into eight serial streams of pixels, with one pixel being output approximately every 130ns.
The N/S (Normalizer/Scaler) hardware next normalizes the pixel values from the 1024 CCD elements and then scales the image down. The maximum resolution after scaling is 137.5 pixels per inch (11/16ths scaling of the 200 dpi captured image) in both dimensions (e.g., see the example shown in Fig. 2, and see Figs. 1'-5' described below). In this example the 128-pixel scans in each Channel are reduced to 88 pixels per scan. The N/S hardware "time-multiplexes" four channels' worth of data onto two 8-bit serial outputs to the JPEG "P/C" hardware. The 88 pixels from all four "even-numbered" Channels (0, 2, 4, 6), a total of 352 pixels per scan at 137.5 dpi, are time-multiplexed along one serial stream, while the pixels from the four "odd" Channels (1, 3, 5 and 7) are multiplexed along a second serial stream. The two serial pixel streams operate at 50ns/pixel (20 MHz) to guarantee that all 352 pixels per scan on each serial path can be transferred to the JPEG "P/C" hardware before the next scan is transferred.
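The even/odd channel multiplexing just described can be sketched as follows (a behavioral Python illustration; the channel ordering is taken from the text, the function name is an assumption):

```python
# Even/odd time-multiplexing of eight 88-pixel channels onto two
# serial streams, as described in the text (a behavioral sketch only).
PIXELS_PER_CHANNEL = 88  # 128 pixels scaled by 11/16

def multiplex_channels(channels):
    """channels: list of 8 lists, each holding 88 scaled pixel values."""
    assert len(channels) == 8
    even = [p for ch in (0, 2, 4, 6) for p in channels[ch]]
    odd = [p for ch in (1, 3, 5, 7) for p in channels[ch]]
    return even, odd  # each stream carries 352 pixels per scan

channels = [[ch] * PIXELS_PER_CHANNEL for ch in range(8)]
even, odd = multiplex_channels(channels)
print(len(even), len(odd))  # 352 352
```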
A pair of JPEG "P/C" paths are preferably implemented on an H/C PCB (Histogram/Compressor Printed Circuit Board, as indicated in Fig. 1C). This H/C PCB must detect the image dimensions and perform image processing on the scaled image prior to compression. Selected image processing algorithms require a grey level histogram of the entire image prior to execution. This means the entire image must be buffered (e.g., at 3-1, Fig. 3) and a histogram generated (e.g., at 3-7) before image processing can begin. Once image processing is complete, compression can begin.
The performance of the entire image system is what dictates how the JPEG Processing/Compression hardware will reduce each image to a target packet size; this is why the here-detailed JPEG Compression hardware embodiment executes a 2-pass compression. The first pass uses a "standard" QM (Quantization Matrix) for JPEG compression. The results of the first pass compression, as well as the detected image dimensions, are used to pick a second QM for a second, final compression that will reduce the scaled image to the desired compressed packet size.
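The two-pass strategy can be sketched in Python as below; the quantization tables, the selection heuristic, and the toy compress function are illustrative assumptions, not the actual hardware algorithm:

```python
# Two-pass compression sketch: a first pass with a "standard" QM, whose
# result size picks a second QM that should hit the target packet size.
def two_pass_compress(image, target_size, compress, qm_tables):
    first = compress(image, qm_tables[0])        # pass 1: standard QM
    ratio = len(first) / float(target_size)      # overshoot factor
    index = min(int(ratio), len(qm_tables) - 1)  # coarser QM if overshot
    return compress(image, qm_tables[index])     # pass 2: final QM

# Toy stand-in: "compression" by subsampling; larger qm = coarser.
toy_compress = lambda img, qm: img[::qm]
image = list(range(96))
out = two_pass_compress(image, 30, toy_compress, [1, 2, 4, 8])
print(len(out))  # 12: within the 30-element target
```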
To maintain system performance, the JPEG Processing/Compression hardware must perform all these functions in real time, which equates to generating a JPEG compression packet in 20ms for a 6-inch check. Because a single JPEG "P/C" path cannot meet these requirements, multiple paths operating in parallel are required. The described H/C PCB was equipped with two independent JPEG "P/C" paths for this purpose (see Figs. 3, 1C; preferably, the system has locations for up to 4 H/C PCBs for Front/Rear imaging; this means the system can have as many as 8 JPEG compression paths operating in parallel, e.g., two for each H/C PCB, i.e., a pair on each side). For example, Fig. 4 indicates how two JPEG processing/compression paths are implemented on one H/C PCBA and how up to 4 H/Cs can be used on each side (front and back) of the imaging system.
Image Module, IM; Fig. 1: The JPEG Image Module (IM) is a device that, in conjunction with a document processor, allows image data from the front and rear of documents to be collected, compressed, packaged and (finally) transmitted to a mass storage device. The document processor is a device which is capable of transporting documents down a track, reading data encoded on the documents, providing that data to an application program running on a host computer, and (finally) sorting documents to bins based upon that information. The IM provides outputs to alternative recognition devices such as a Courtesy Amount Reader (CAR) unit (1-23, Fig. 1) and mass storage devices such as a Storage Retrieval Module (SRM: 1-20, Fig. 1). Images can be retrieved from the SRM for data entry or data correction. The new, augmented or corrected data is then passed on to the application mainframe. Data read by the CAR unit is also returned to the application mainframe. In this way the document data base can be more economically updated and corrected for use by the application program.
Fig. 1 is a block diagram of major functional portions of such an imaging module IM, apt for use with this invention. Preferably, module IM is used as a printed circuit unit (board) for imaging/processing of images from one side of passing documents, with a second, like board (not shown) used for the other (Rear) side. Module IM produces compressed 8-bit gray level (electronic) images (image data) at resolutions of about 100 pixels per inch. Image data is compressed using a "JPEG Baseline Sequential" technique. JPEG is a family of gray scale image compression techniques adopted by the International Standards Organization (ISO) Joint Photographic Expert Group (JPEG) subcommittee, as workers realize. The compressed images are encapsulated in a Tagged Image File Format (TIFF). TIFF is an industry standard format for the encapsulation of image data, as workers realize.
Salient Functional Units:
In Fig. 1, it will be evident that an electronic camera Unit 1-1 captures successive images (e.g., of passing checks or like financial documents, front and rear, suitably illuminated; e.g., lamp illumination controlled by Unit 1-3). The camera uses a segmented linear "Charge Coupled Photo Diode" (CCPD) array ("segmented" meaning that the linear array of photo-diodes is subdivided into 8 segments or channels, each of which has its own output, thus effectively multiplying the rate at which image data can be clocked out by the number of subdivisions; here the channels are numbered 0 through 7). The electronic image from each channel is digitized by an Image Digitizer 1-5 and presented, in parallel, to a pair of like Normalizer/Scaler (N/S) units: a master N/S 1-7 and a slave N/S 1-9 (e.g., on separate boards, each identically processing four video data channels; e.g., the four "even"-numbered channels on the Master, the four "odd" on the Slave). These channels are merged into a single output channel for compression (as further detailed below). Their output, the scaled/normalized image signal path, is indicated in Fig. 1. The transfers of data between the Digitizer, N/S, and other functions are more diagrammatically illustrated in Fig. 2, while processing/compression is further, similarly illustrated in Fig. 3. An additional output of normalized (unscaled) image data comes from each N/S board to provide image data to a CAR system 1-22, 1-21, 1-23. The output of the two N/S boards is presented to a Histogram/Compressor (HC) array preferably comprising from two to four like units (boards) 1-11A, 1-11B, 1-11C and 1-11D operating in parallel. The number of boards depends on the required throughput (in documents per second), as determined by the document processor's throughput and the type of error detection required. Each unit 1-11 processes and compresses image data from N/S units 1-7, 1-9 and implements two like, independent data paths, each of which can process an entire single image.
The two paths on the H/C board can independently process and compress separate images, or can compress the same image synchronously and compare the compression results to detect failures. The sequencing of image data into, and compressed data from, the H/Cs is controlled by a state machine in the JPEG Compressed Data Buffer (JCDB) 1-15. When an HC path's compression is complete it signals the JCDB. When the JCDB is ready to accept data from that path, it signals that path to output data for storage on the JCDB. A signal PDOCPRES_N, which is activated when the N/S detects that a document is in front of the camera, is applied from the N/S units to JCDB 1-15, whose state machine uses it to sequentially allocate images to H/C paths.
Buffer JCDB 1-15 is coupled to a Main Processor (MP) 1-16 which packages the JPEG compressed image data into the TIFF format employed by the system.
Further on Module IM. Fig. 1: The compressed data is transferred from the MP 1-16 to the Point-to-point Optical Link (POL board 1-19 which is a fiber-optic interface) for transmission to the Storage and Retrieval Module (SRM) 1-20. The SRM stores the images and, under the direction of the application mainframe, manages the TIFF packaged compressed image files for later distribution to image workstations (not shown) for viewing, correction or entry of associated document data.
The Diagnostic Transport Interface (DTI) 1-17 is one end of the interface between the document processor and imaging module IM. The Image Interface Board (IIB) 1-18 resides in the document processor and sends document data and control signals from the document processor to the DTI. The DTI receives this data and passes document data, across the Main Bus, to MP 1-16 for inclusion in the TIFF packet for the associated compressed image. The DTI also receives control signals from the document processor's tracking logic through the IIB 1-18 that indicate that a document is approaching the camera. These signals are passed on to N/S boards 1-7, 1-9 where they are used to prime the edge detection circuits to detect the leading and trailing edges of documents passing the camera (along the track). The Character Recognition (CAR) subsystem 1-21, 1-22 and 1-23 consists of two circuit boards (CAR Preprocessor or Bypass 1-22 and CAR Port 1-21) plus the CAR Unit 1-23. Here, only the Front N/S boards are linked to the CAR subsystem. Both boards perform specialized image processing functions on the normalized image data from the N/S boards; they improve the chances that the Character Recognition Unit (CAR Unit) 1-23 will successfully read information from the document. After these processing steps the processed image data is sent to the CAR unit for recognition of data. The results of reading the data are returned to the document processor for inclusion in the data files stored on the application software mainframe. The information read successfully from the documents can be used to correct the data files on the mainframe.
Image and Data Synchronization:
Module IM (Fig. 1) can be processing, compressing or storing images and documenting data for as many as 25 images at any one time. It is important to make sure that a document's images and associated document data from the document processor remain in synchronization, so documents are not misidentified. As documents begin to move down the track of the document processor, each document is detected by tracking logic (see below) and assigned a tracking identification code ("sync tag"). The sync tags are assigned sequentially to each image (e.g., by software in the DTI) to identify it and prevent its loss, as well as to help later in sorting. The tag preferably is triggered on detection of a check (document) leading edge and comprises a 16-bit identifier code (for image, frame, etc.) assigned in FIFO order, preferably supplied by the DTI unit.
Operation of the document processor is such that there are multiple documents moving through the track at any one time. The document processor tracking logic is used to determine the physical location of a document in the track. The document processor also has subsystems, such as Magnetic Ink Character Recognition (MICR) readers, that may generate data that is subsequently associated with the document moving through that subsystem. Module IM receives the sync tags and document data and queues them up in the memory of Main Processor (MP) 1-16, in a first-in, first-out (FIFO) fashion; likewise the sync tags are stored in a FIFO queue on the NS. When a document moving down the track approaches IM camera 1-1, the tracking logic senses it and a signal is sent to the IM to alert the NS to find the leading edge of the document. When the document is found, the NS activates the PDOCPRES_N signal, thus alerting the H/C units to the forthcoming image data. When the trailing edge of the document is detected by the tracking logic, it sends a signal to module IM to alert the NS to find this trailing edge. When the trailing edge is found, the document's sync tag (see Fig. 3A) is pulled from the queue in the N/S and attached to the "document status field" (which is appended to the trailing end of the related image data).
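The FIFO handling of sync tags described above can be sketched as follows (a Python model; the class and method names are illustrative, not part of the hardware):

```python
# FIFO sync-tag model: a 16-bit tag is assigned at each leading edge
# and the oldest pending tag is attached at each trailing edge.
from collections import deque

class SyncTagQueue:
    def __init__(self):
        self._next = 0
        self._pending = deque()

    def assign(self):
        """Leading edge detected: assign the next 16-bit tag."""
        tag = self._next
        self._next = (self._next + 1) & 0xFFFF  # wrap at 16 bits
        self._pending.append(tag)
        return tag

    def attach(self):
        """Trailing edge found: pull the oldest tag (FIFO order)."""
        return self._pending.popleft()

q = SyncTagQueue()
q.assign(); q.assign()          # two documents enter the track
print(q.attach(), q.attach())   # 0 1  (tags leave in FIFO order)
```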
The foregoing may be understood as applying to imaging of one side (e.g., Front) of a passing document. Processing image data for the other side (e.g., rear of a document) proceeds in a similar manner, preferably with separate, independent camera, digitizer, NS, HC, and JCDB boards.
The sync tags are used in module IM in the following ways to assure that image data from a given document are kept "in sync" (i.e., in sequence, the same for any given document):
— When the NS finishes processing an item, the Diagnostic Transport Interface (DTI) checks to make sure that the sync tags from both the front and rear master NS are the expected "next tags" (in sequence) and that they are identical.
— The H/Cs use the sync tags to assure that the image data from the master and slave NS units are identical, assuring that these boards are "in synchronization" (i.e., handling data from the same document). The HCs compress the image, then the sync tags are passed along with the compressed-image-data to the JCDBs, where they are queued-up (in FIFO fashion).
— The JCDB interrupts the MP when it has image data available. The MP reads the image data and status bits (which include the sync tag) from the JCDB, and checks to see that the sync tag from the image read from the JCDB matches the expected sync tag as stored in its own memory queue.
— The sync tag is also fed to the "CAR PREPROCESSOR OR BYPASS" 1-22 and CAR Port board 1-21 to keep the CAR unit synchronized with document processor flow.
Whenever a mismatch of sync tags is found, an error is declared and document processor flow is ordered to stop.
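A minimal sketch of such a consistency check (Python; the function name and error type are illustrative):

```python
# Sync-tag consistency check: compare the tag carried with an image
# against the expected next tag in the local FIFO queue; a mismatch
# declares an error that should stop document flow.
def check_sync_tag(expected_queue, received_tag):
    expected = expected_queue.pop(0)
    if received_tag != expected:
        raise RuntimeError(
            "sync tag mismatch: expected %d, got %d; stopping flow"
            % (expected, received_tag))
    return expected

queue = [7, 8, 9]
print(check_sync_tag(queue, 7))  # 7 (in sync; flow continues)
```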
"Stop Flow" Requests:
According to a feature hereof, a "Stop Flow Request" (SFR) is generated by the DTI, JCDB, N/S, H/C or ID when they detect an "error condition": e.g., a sync tag mismatch, or incorrect matching of document data with image data, or detection of conditions that would corrupt or render an image unusable. The JCDB detects such an SFR signal and interrupts the MP (e.g., before check processing is carried further); e.g., the JCDB detects the SFR and latches its source in a Stop Flow Accumulator (contained on the JCDB) for interrogation by the MP. Use of such Stop Flow Requests will reduce the number of documents, the time, and the complexity that would later be involved in "error recovery".
An example is as follows: a "leading edge not found" fault might be detected at an NS unit (a signal from the tracking logic is received, alerting the IM to arrival of a document, but the edge of the document cannot be determined from the image data). As a result, the N/S unit would issue an SFR to processor MP through the JCDB, terminating further processing of all images from this document, as well as of other items upstream of this item in the document processor's track. The MP first notifies the document processor to stop the flow of documents, and then finishes handling the image data it has confidence in (e.g., compressed images downstream of the point of fault detection). The mishandled document, and any documents that followed it through the transport, will have to be recovered from the document processor's pockets and repassed through the document processor on resumption of processing. This system may be set up for various Stop Flow requirements (according to the application involved) and to automatically, programmatically "mask" certain faults accordingly. For example, some applications could require an SFR if the trailing edge of a document is not found, yet other applications could be more forgiving and not require an SFR for this situation.
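The Stop Flow Accumulator and programmable masking can be sketched as below (Python; the fault bit assignments and names are illustrative assumptions):

```python
# Stop Flow Accumulator sketch: fault sources are latched for later
# interrogation by the MP; masked faults are recorded but do not stop flow.
SFR_LEADING_EDGE_NOT_FOUND = 1 << 0
SFR_TRAILING_EDGE_NOT_FOUND = 1 << 1
SFR_SYNC_TAG_MISMATCH = 1 << 2

class StopFlowAccumulator:
    def __init__(self, mask=0):
        self.mask = mask    # faults in the mask do NOT trigger a stop
        self.latched = 0    # all reported faults, for MP interrogation

    def report(self, fault):
        """Latch the fault source; return True if flow must stop."""
        self.latched |= fault
        return bool(fault & ~self.mask)

# An application that tolerates a missing trailing edge:
acc = StopFlowAccumulator(mask=SFR_TRAILING_EDGE_NOT_FOUND)
print(acc.report(SFR_TRAILING_EDGE_NOT_FOUND))  # False (masked)
print(acc.report(SFR_SYNC_TAG_MISMATCH))        # True (stop flow)
```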
The advantage of this Stop Flow feature is that it reduces the number of documents that have to be repassed through the document processor when an SFR occurs: because the processor is notified immediately when an error condition generates a stop flow request, and because the request is associated with a particular item or event, antecedent documents (conditions) can be handled normally, reducing the number of documents that must be specially handled to recover from an SFR event.
Camera Quality Monitor, Fig. 1:
The design of module IM also accommodates adding an optional Camera Quality Monitor unit (CQM) 1-13. The CQM monitors normalized image data and normalized/scaled image data from the NS boards, as well as compressed image data being sent from the HCs to the JCDB. Preferably, one CQM monitors data on the IM for the front image, while another monitors data for the rear image. A variety of problems associated with the camera can be detected by analysis of the data collected at these points. As one example: when components in illumination system 1-3 age, lamp output may dim. Monitoring the normalized data, and checking for a long-term change in average document brightness, can allow one to notify service personnel that replacement or adjustment is required before the images are badly degraded. The front and rear images are each monitored by their own CQM.
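The long-term brightness check might look like the following sketch (Python; the smoothing factor and drift limit are illustrative assumptions):

```python
# Camera-quality sketch: keep an exponential running average of document
# brightness and flag a sustained drift from the calibrated baseline
# (e.g., a dimming lamp) so service can be scheduled.
class BrightnessMonitor:
    def __init__(self, baseline, drift_limit=20.0, alpha=0.01):
        self.baseline = baseline          # brightness at calibration
        self.average = baseline           # long-term running average
        self.drift_limit = drift_limit    # allowed drift, gray levels
        self.alpha = alpha                # smoothing factor

    def observe(self, doc_brightness):
        """Update the running average; True means service is needed."""
        self.average += self.alpha * (doc_brightness - self.average)
        return abs(self.average - self.baseline) > self.drift_limit

mon = BrightnessMonitor(baseline=180.0)
dim = any(mon.observe(140.0) for _ in range(500))  # lamp has dimmed
print(dim)  # True: the average has drifted past the limit
```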
N/S Details (also see Figs. 1'-5', described below):
The Normalizer/Scaler function is, as above noted, preferably implemented by a two-board set (e.g., see Fig. 1: Master 1-7 processing the four even-numbered channels from Digitizer 1-5; Slave 1-9 similarly processing the four odd-numbered channels therefrom). These boards 1-7, 1-9 operate in synchronism, with one, the master, arbitrating. Master N/S Unit 1-7 also provides "document synchronization" signals (such as PDOCPRES_N) used by downstream boards to identify that a document is being imaged. The N/S pair also provide image data to the CAR unit.
The N/S units employ a scan capture circuit to capture pre-normalized image data for use in generating the numeric tables required to normalize the image data during normal operation. Upon a command from the DTI, each N/S starts collecting pre-normalized image data; each channel has its own capture circuit capturing 16 consecutive scans of image data (128 bytes of data per scan) into a first in first out (FIFO) memory that can be read by the DTI board for transfer again to the MP for processing. Data is collected as part of a calibration procedure in which the camera images a uniformly white target, and then a black target, to provide "ideal" white and black stimuli to the camera. Software running on the MP executes an algorithm that transforms this raw data into data tables suitable for normalizing image data. The data tables are transferred from the MP to the DTI and then stored in "Look Up" tables on the N/S.
The N/S normalizes incoming image data using the information in the look-up tables (preferably two 64K X 8 RAM, one for test). Normalization is accomplished by using the image data and its position in the scan to sequentially address the look-up table. The content of each address has been precalculated by the normalization software running on the MP from pre-normalized data (collected during calibration) to be the normalized value of the image data for that pre-normalized value and position in the scan. There are 128 possible output values for each of the 128 pixel positions in the scan.
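Look-up-table normalization as described can be sketched as below (Python; the address packing of position and pixel value, and the identity-style demo table, are illustrative assumptions; a real table is computed from calibration data):

```python
# LUT normalization sketch: the pixel's scan position and raw value
# together address a precomputed table; here 128 positions x 256 raw
# values, each mapped to one of 128 normalized output levels.
def build_demo_lut():
    # Stand-in for the calibration-derived table: halve the raw value,
    # ignoring position (a real table corrects per-position response).
    return [(addr & 0xFF) >> 1 for addr in range(128 * 256)]

def normalize_scan(scan, lut):
    """scan: up to 128 raw pixel values (0-255) from one channel."""
    return [lut[(pos << 8) | pixel] for pos, pixel in enumerate(scan)]

lut = build_demo_lut()
print(normalize_scan([0, 255, 128], lut))  # [0, 127, 64]
```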
Scaling is preferably based on PROM look-up tables; the tables allow the selection, through software running on the MP, of up to 8 "scaling factors" (from a factor of 11/16 to a factor of 5/16), along with the method of scaling. The preferred scaling is 8/16 (1/2), using a 2x2 pixel window average method.
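The preferred 1/2 scaling with a 2x2 window average reduces to simple arithmetic; a sketch (Python, a pure-software stand-in for the PROM tables):

```python
# 8/16 (1/2) scaling sketch: each output pixel is the average of a
# 2x2 window of input pixels, halving both dimensions.
def scale_half(image):
    """image: 2-D list of gray values with even height and width."""
    h, w = len(image), len(image[0])
    return [[(image[r][c] + image[r][c + 1] +
              image[r + 1][c] + image[r + 1][c + 1]) // 4
             for c in range(0, w, 2)]
            for r in range(0, h, 2)]

img = [[10, 20, 30, 40],
       [50, 60, 70, 80]]
print(scale_half(img))  # [[35, 55]]
```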
Document Edge Detection is performed with an algorithm whereby each channel compares the average brightness of the present scan line with the average brightness of the transport with no document present. When no document is present, the detector averages and stores the average brightness of the transport ("no document brightness"). Document tracking logic in the document processor notifies these circuits that a document is about to enter, or leave, the range of the camera, and that comparisons should begin. If a significant change of brightness occurs in a channel, then (by this) that channel indicates that it has found an edge. Leading edges are found when any one of the 8 detectors finds that brightness has increased above the stored "no document average" by a preferred threshold of 14 gray levels, and PDOCPRES_N is asserted. Trailing edges are declared if all 8 channels have found that they have returned to within a preferred 18 gray levels of the "no document" average; then PDOCPRES_N is cleared. If a leading edge is expected and not found by any detectors after a prescribed time, then a "leading edge indication" is "forced" by asserting PDOCPRES_N. This implies that the document image may have problems, so this occurrence is flagged in "document status". The status data is transferred along with the normalized and scaled (and later compressed) image data as it moves through the system.
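The stated thresholds (14 gray levels to find a leading edge on any channel, 18 to declare a trailing edge on all channels) can be sketched as follows (Python; helper names are illustrative):

```python
# Edge-detection sketch using the thresholds from the text: leading
# edge if ANY channel rises >14 gray levels above its "no document"
# average; trailing edge when ALL channels return to within 18 levels.
LEAD_THRESHOLD = 14
TRAIL_THRESHOLD = 18

def leading_edge(channel_avgs, no_doc_avgs):
    return any(avg > nd + LEAD_THRESHOLD
               for avg, nd in zip(channel_avgs, no_doc_avgs))

def trailing_edge(channel_avgs, no_doc_avgs):
    return all(abs(avg - nd) <= TRAIL_THRESHOLD
               for avg, nd in zip(channel_avgs, no_doc_avgs))

no_doc = [40] * 8
print(leading_edge([40, 40, 60, 40, 40, 40, 40, 40], no_doc))  # True
print(trailing_edge([45] * 8, no_doc))                         # True
```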
Details of Histogram/Compressor, Figs. 1-3:
The Histogram/Compressor functions are, as noted above, preferably performed in two like HC boards per side. Module IM accommodates up to 4 HC boards per side if height throughput or increased levels of fault detection so require. Each H/C contains two image paths, and each path is capable of processing and compressing an entire image as received from the master and slave NS boards. The JCDB's state machine determines that an image has arrived via the PDOCPRES_N signal and selects the HC path to receive the images from the master and slave NS; paths are assigned in a rotating sequence and according to the type of fault detection required. The preferred configuration uses rotating redundancy to periodically verify HC operation; thus paths can be assigned in the following order: 0-1, 2,
3, 0, 1, 2-3, 0, 1, 2, 3, 0-1 etc. The HC boards contain logic that detects if the output data of the two paths does not match (when they are both processing the same image); such a detection is an indication that a fault has occurred on one of the two paths. "Rotating redundancy" is useful for checking for "hard failures". If "transient failures" are of concern to a particular application, then additional HC boards can be added to the system and image data can be processed fully redundantly. As a feature hereof, this HC unit allows comparison of two compressed images in their entirety (e.g., vs by-channels).
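The rotating path-assignment order quoted above (0-1, 2, 3, 0, 1, 2-3, ...) can be reproduced with a simple generator. This is a sketch under stated assumptions: the redundancy period of 5 assignments is inferred from the example sequence, and the generator structure itself is illustrative, not the actual state-machine design.

```python
# Illustrative reconstruction of the rotating-redundancy path assignment.
# Every `redundancy_period`-th assignment sends the same image down a
# redundant pair of paths so their outputs can be compared.

def path_assignments(n_paths=4, redundancy_period=5):
    """Yield path assignments in rotating order; a tuple of two paths
    denotes a redundant pair processing the same image."""
    p = 0  # next path in the rotation
    i = 0  # assignment index
    while True:
        if i % redundancy_period == 0:
            yield (p % n_paths, (p + 1) % n_paths)  # redundant pair
            p += 2
        else:
            yield (p % n_paths,)                     # single path
            p += 1
        i += 1
```

Iterating this generator reproduces the document's example order 0-1, 2, 3, 0, 1, 2-3, 0, 1, 2, 3, 0-1.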
A HC path receives the combined data from the master and slave NS boards and combines the odd and even channel data into one image which is stored in a buffer memory (3-1, Fig. 3). As the data is being input to the memory, it is sampled by a histogram width-detector 3-5, and a top and bottom edge detector 3-3. The histogram circuit 3-7 builds a histogram of image data values composing the document image. The histogram data is used to modify the image (as described later). The top and bottom edge detectors 3-3 sample the image data and attempt to find the top and bottom of the document image (within the full height of the input data scan).
There are 1024 pixels in the preferred embodiment of the IM scan; a typical 2.75 inch item will occupy only 550 pixels of a scan. The preferred algorithm looks for the highest and lowest occurrences of "non-background data" within all the scans of a document. In the preferred embodiment, a "non-background" is declared if there is a gray level transition within 5 pixels. The points determined for top and bottom, and width, are made available to the Digital Signal Processor (DSP 3-15) which uses them when pulling the document image from the buffer. Finding the width (e.g., lead and trail edges) and the top and bottom of images allows the Digital Signal Processor 3-15 (DSP) to handle only genuine document data and ignore all else, thus improving throughput and providing images that do not have extraneous data associated with them.
The software running on DSP 3-15 executes an algorithm which uses the histogram data to alter the original image data in the HC memory in a manner that reduces contrast and stretches gray levels thereby improving legibility and compression characteristics (by means of the Remap 3-13).
The HC compresses images in "two pass" fashion.
The first pass preferably uses a JPEG Quantization Matrix (QM). The QM is a set of numeric factors used by the JPEG algorithm to affect image compressibility and is known to workers in the art. The preferred (or target) QM is picked in the course of IM design to optimize both image quality and the size of the resulting compressed image packet. The same QM is used for first pass compression of all images.
When "First compression" is complete, the size of the compressed image data packet is checked, along with image size. If the packet is too large (considering the size of the image), an alternative QM is used for "second pass compression"; the alternative picked is determined from experience and is embodied in the algorithm executing on the DSP. If the packet is smaller than expected, a less aggressive QM may be used; otherwise the target QM is reused.
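The second-pass QM choice can be sketched as follows. This is a hedged illustration only: the text says the actual alternative "is determined from experience and is embodied in the algorithm executing on the DSP", so the bits-per-pixel target and both multipliers below are invented placeholders.

```python
# Hypothetical sketch of second-pass QM selection based on first-pass
# packet size relative to image size. All numeric thresholds are
# illustrative assumptions, not the patent's actual values.

TARGET_BITS_PER_PIXEL = 0.75  # assumed design target

def select_second_pass_qm(first_pass_bytes, width_px, height_px,
                          target_qm, more_aggressive_qm, less_aggressive_qm):
    """Pick the QM for the second (final) compression pass."""
    bpp = (first_pass_bytes * 8) / (width_px * height_px)
    if bpp > TARGET_BITS_PER_PIXEL * 1.25:   # packet too large for image size
        return more_aggressive_qm
    if bpp < TARGET_BITS_PER_PIXEL * 0.5:    # packet smaller than expected
        return less_aggressive_qm
    return target_qm                          # otherwise reuse the target QM
```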
After second pass compression, the HC builds a standard JPEG compressed image structure in Output Buffer 3-25 and appends to it the status bits received from the NS 3-9, along with its own status bits and other information.
When compression is complete, data is queued in Output Buffer 3-25, and the JCDB is notified that the path work is completed. The JCDB notifies the HC that it is ready to receive the compressed data packet (and status) when it selects the HC for output. Thus, a typical such HC unit (e.g., 1-11A for bits 0, 1) may do the following:
1 - Sample alternate pixels and assign one of 64 "bins";
2 - Locate image-top, -bottom
3 - Measure image-length
4 - Using its DSP (digital signal processor) histogram data, generate a "Remap Table" to reduce contrast and stretch grey level, and so make the bit image more legible and more compressible;
5 - Initiate first compression;
6 - Monitor results of First compression (and "status" from associated N/S unit) to build "header" in JCDB. And, if maximum allowed compressed size is exceeded, will accordingly truncate images (from the bottom); and initiate a second compression if needed; and
7 - Prepare bit set for application to JCDB.
JCDB;
Buffer JCDB 1-15 (Fig. 1) provides the interface between the HC paths and the MP 1-16. It preferably comprises a set (e.g., 16 X 128 kilobyte) of compressed image buffer units, preferably operated as eight redundant buffer pairs (Primary-, Secondary-). The contents of the primary buffer are compared to the contents of the secondary buffer, upon readout, as a method of validating hardware performance. These buffers are directly readable and writable by the MP via a local bus extension of the microprocessor of the MP. Should extra buffering capacity be needed (e.g., the POL is busy), the JCDB can fall out of redundant operation and use its 8 secondary buffers to store images. When this happens, the MP notifies the document processor to STOP document flow because there is a danger of running out of buffer space (which could necessitate a complex recovery procedure). The JCDB, as noted earlier, has a state machine which controls the selection of NS-to-HC paths for input of images from the NSs and the transfer of compressed image packets out of the HCs to the JCDB. Preferably, the MP can program the state machine on the JCDB to operate module IM with 2, 3, or 4 HC boards. The JCDB also receives the Stop Flow Request lines from all boards. If any of these lines is activated, the JCDB notifies the MP. Each of the NS, HC, or JCDB boards in module IM can have any of their detected fault conditions activated, by command from the MP, to trigger a "Stop Flow request" line to the JCDB. Results of this activation were discussed earlier. Functions of JCDB:
(a) Controls selection of paths (input/output) to/from HC;
Thus, can be programmed to operate with 2, 3 or 4 HC boards; also controls run-time testing of HC units by selecting both of its paths to process image data. Also, monitors "busy" and "ready" signals from HC units to accommodate document throughput;
(b) Verifies size of transferred image;
(c) Provides "Sync Tag" data from a received image to DTI for verification;
(d) Generates "interrupts" (e.g., "STOP Flow"?) to MP and DTI upon receipt of image from HC array.
Image Digitizer 1-5: Image Digitizer ID 1-5, receives the output of the camera in the form of 8 pairs of analog signals (e.g., 8-odd, 8-even pixels from 8-segment CCPD) plus a clock signal, and a "camera side" identification (front vs rear). The "side information" is determined at time of manufacture by installing a jumper. Preferably, this ID output comprises: eight channels of serial, digitized video, with parity for each channel. The preferred clock rate is 8.125 MHz, with a 900ns idle period between every scanline (16.7us). Digitizer ID also has a diagnostic memory that can be loaded with test patterns by the DTI over a special (C and D) bus.
When the document processor tracking logic does not indicate any documents approaching the camera, the DTI senses this and commands a diagnostic RAM to inject test patterns into the eight channels. Identical data will be output on all 8 channels. These patterns are used by the NS to test normalization and scaling logic (by comparing data); they are also similarly used to test the interface between the ID and NS boards. The NS normalization circuit must alter its operation when test patterns are present on the channels to assure that the outputs of the multiple normalizer circuits are identical. The test portion of the normalizer look-up table is used to provide a 1 to 1 mapping when test patterns are activated, thus assuring that "data-in" matches "data-out".
JPEG Compression Path Functions:
The Block diagram in Fig. 3 (as aforenoted) indicates the preferred functions performed along a single JPEG processing/compression path on a preferred H/C PCB. Two independent JPEG processing/compression paths are implemented on each H/C PCB. These blocks are characterized briefly as follows: a - N/S Inputs:
The JPEG compression path receives
"Normalized Scaled" (N/S) image data from the N/S hardware in two serial, 8-bit paths (see two N/S inputs, to 3-1, -3, -5, -7,
-9). Each path contains 4 Channels of time-multiplexed, scaled scans; scaled-down to a maximum of 137.5 pixels per inch. The pixels are delivered across both paths at a rate of 50ns per pixel (20 MHz). Each pixel is 7 bits representing 128 grey levels with the MSB ("Most Significant Bit") of the 8-bit path set to 0.
b - Input Buffer, 3-1: The entire scaled image is stored in a 1 Meg x 8 Buffer 3-1 as it is delivered from the N/S hardware.
c - Top/Bottom Detect, 3-3:
As the image is being received from the N/S hardware and stored in Input Buffer 3-1, the
Top/Bottom Detect Circuitry 3-3 finds the Top and Bottom of the scaled image. The picture shown in Fig. 3 illustrates an exaggerated skew condition that may occur (usually to a much smaller degree) as the sorter moves the check past the CCD camera. The Top/Bottom Circuitry "finds" the extreme
Top and Bottom locations.
d - Width Detect, 3-5:
Lead-edge and Trail-edge detection is done by the N/S hardware. The Width Detect
hardware (Stage 3-5) counts the number of scaled pixel scans between the extreme Lead and Trail edges as shown in Fig. 3.
e - Histogram, 3-7:
As the image is being received from the N/S
hardware and stored in the Input Buffer, a
Histogram Circuit 3-7 samples every other pixel (one pixel every 50ns, switching between both signal paths from the N/S hardware) and sorts the 128 grey level
pixels into one of 64 Histogram Bins (LSB, or Least Significant Bit, of 7-bit pixel value is ignored). Once the image is entirely received, the final Histogram will have sorted 50% of the scaled image's pixels, selected in a checkerboard arrangement across the entire image, into the 64 bins that comprise the Histogram 3-7.
Pixels above and below the detected Top and Bottom borders of the image are included in the Histogram, but pixels outside the detected Width are excluded. Each bin can record up to 64K occurrences of adjacent grey levels and will "peg" at 64K if that bin overflows.
f - Status Block 3-9:
Status and other image-related information is appended to the Trailing Edge of the scaled image as it is delivered to the JPEG processing/compression path from the N/S hardware. This information is buffered in Status block 3-9 so that it can be retrieved and sent on with the compressed image.
g - Transpose and Block Unit 3-11: The N/S hardware delivers scaled image data in Vertical scans spanning 8 Channels and starting from either the Top or Bottom of the image (depending on camera orientation), while moving vertically. JPEG compression requires image pixels to be compressed in 8x8, raster-scanned Pixel Blocks. Pixel Blocks are then scanned in the same manner as pixels within a block: horizontally
across the Width of the image, and then vertically between the Top and the Bottom. This "Transposition and Blocking" Operation (3-11) can be started at any of the four corners of the image. Pixels outside the
Top and Bottom borders of the image, and outside the Width of the image, are not addressed during transposition, thus eliminating useless track background data from the Compressed Image Packet. The
output of the Transpose and Block Circuitry
3-11 serves as an address to the 1 Meg x 8 Input Buffer 3-1. (See Fig. 6, also.)
h - Remap Unit 3-13: Image processing is done by changing (Remapping at 3-13) the 7-bit scaled pixels in the Input Buffer 3-1 to 8-bit image-processed pixels as the image is being compressed.
h' - DSP Unit 3-15:
A DSP Unit 3-15 (Digital Signal Processor) determines the Remap Values by reading the
64-bin Histogram (from 3-7, along with inputs from 3-9, 3-5, and 3-3), executing the image processing algorithms and loading the 128 resultant "Remap Values" into a "1 to 1" Remap Memory at 3-13.
j - LSI JPEG Compression, 3-17: A DCT (Discrete Cosine Transform), quantization and Huffman coding for JPEG compression are done in a Chip Set (3-17, preferably from the LSI Logic Corporation). This Chip Set 3-17 does not need to be "throttled" as it compresses an image, and, therefore, "compression latency" through this Chip Set is deterministic. Scaled, remapped pixels are sent through the LSI Chip Set 3-17 at a 25 MHz rate. The LSI output is a 32-bit word containing JPEG compressed data (but does not have bytes "stuffed" or restart codes inserted, these being inserted at Block 3-19).
k - Packet Counter 3-21:
This Counter records the amount of compressed data emerging from the LSI JPEG Compression Chip Set 3-17. This Count is used by the DSP Unit 3-15 to retrieve the results of first pass compression so a second QM can be selected for the second (final) compression.
l - Post Processor 3-19:
The Post Processor performs the JPEG required byte stuffing and Restart Marker insertion not done by the LSI Chip Set. The Post Processor funnels the 32-bit outputs from the LSI Chip Set down to a 16-bit wide
Output Buffer 3-25, inserting stuffed bytes and restart codes when necessary. To ease the location of entropy-coded segments (compressed data between Restart Markers) in the final Compressed Image Packet, restart codes are inserted on 32-bit boundaries by the Post Processor Unit 3-19
(codes recorded in 3-23). Therefore, the Post Processor must insert padding bytes, when required, to align restart codes on double-word boundaries.
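The double-word alignment rule for restart codes reduces to a small arithmetic step, sketched below. The function name is illustrative; the 32-bit boundary is the rule stated in the text.

```python
# Illustrative sketch of the Post Processor's alignment rule: restart
# codes are inserted on 32-bit (double-word) boundaries, so padding bytes
# are added when the preceding entropy-coded data does not end on one.

def padding_before_restart(byte_offset):
    """Bytes of padding needed so the next restart code begins on a
    32-bit boundary, given the current byte offset in the packet."""
    return (4 - (byte_offset % 4)) % 4
```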
m - Restart Marker Table (FIFO) 3-23:
A table of addresses pointing to the location of Restart Markers in the Compressed Image Packet is stored in the Restart Marker FIFO 3-23 and can be used by the DSP 3-15 to truncate images on entropy coded segment boundaries.
n - Output Buffer 3-25:
The JPEG processing/compression path assembles the Compressed Image Packet in a 64K x 16 Output Buffer 3-25. Once a
Compressed Image Packet is completely assembled, it can be burst-transferred to the next component in the imaging system, the JCDB (JPEG Compressed Data Buffer).
In sum, the DSP 3-15 performs the following functions: i - After an image is input from the N/S, the DSP 3-15 reads the 64-bin Histogram, executes the image processing algorithms and loads the resultant Remap Table into the Remap RAM (at 3-13);
ii - Prior to first pass compression, the DSP loads the default QM and Huffman tables into the LSI JPEG Compression hardware;
iii- After first pass compression, the DSP retrieves the results (of first pass compression) from the Packet Count Register 3-21, and the image dimensions from the Top/Bottom and Width Detectors, then selects a new (second)
QM and Huffman table to achieve the proper compression results for a second compression, and loads this chosen QM and Huffman table into the LSI Chip Set;
iv - After second pass compression is complete, the DSP inserts all pertinent header information into the Compressed Image Packet in Output Buffer 3-25.
JPEG Processing/Compression Functions Implemented in Hardware:
In the previous section, the functions of the
JPEG processing/compression path were introduced. In this section, the hardware-intensive functions are described in detail. The DSP functions, although critical to operation of the JPEG processing/compression path, are software based and not discussed in this section. The hardware functions, detailed below, are as follows:
* Histogram
* Top/Bottom Detection
* Width Detection
* Status Capture
* Transpose and Block - Input Buffer
* Remap
* LSI JPEG Compression
* JPEG Post Processor
* Restart Marker FIFO
* Packet Count
* Output Buffer
The DSP, via software, executes the "grey level stretch and contrast reduction" algorithms by reading the 64-bin histogram of an image and generating a 128-entry remap table to be used by the Remap hardware. Also, the DSP chooses and loads the LSI chip set with the QMs and Huffman tables needed for first and second pass compression. Lastly, the DSP builds the necessary header data (surrounding the compressed image data), and loads this header data into the Output Buffer 3-25.
Histogram:
The Histogram is generated in unit 3-7 as the image is being delivered from the N/S hardware and stored in the Input Buffer 3-1. The Histogram could have been generated without any custom hardware by having the DSP unit scan the contents of the Input Buffer after the image has been loaded. This, however, would greatly increase the time required for the JPEG processing/compression path to process the image because:
1 - Not only would it take relatively long
(compared to custom hardware) for the DSP to subsample the contents of the Input Buffer, but 2 - this sampling could not be done until AFTER the image was stored in the Input Buffer.
Thus, by generating a Histogram with custom hardware WHILE the image is being input, the Histogram is available to the DSP as soon as the image has been input.
The only apparent drawback to so generating the Histogram during input of the image is that the Histogram is being generated BEFORE the Top and Bottom boundaries of the check image have been determined (lead and trail edges of image are predetermined by N/S hardware). This means that pixels representing track background, above the top and below the bottom borders of the check image, are included in the Histogram. This will cause the "darkest" bins of the Histogram to be fuller than if the boundaries were known prior to generating a Histogram. This apparent drawback, in reality, is inconsequential, since the image processing algorithms, selected and performed by the DSP according hereto, are not significantly affected by the quantity of pixels in the "darkest" bins.
The Histogram circuitry samples 50% of the pixels in the image by sampling pixels every 50ns (20 MHz input from N/S) from one of the two serial N/S inputs (Channels 0, 2, 4, and 6) and then from the other (Channels 1, 3, 5, and 7). The net effect is that the pixels sampled are arranged in a checkerboard pattern across the image. As a pixel is sampled, the "upper 6 bits" of the 7-bit pixel (LSB is ignored) are used to address a 64-bin Histogram RAM. The contents of the RAM at the addressed location are extracted, incremented by 1 and reloaded into the RAM. These iterations of reading, incrementing and writing must be done every 50ns. Although a 20 MHz synchronous clock is used to begin every iteration, the reading, incrementing and writing are performed with asynchronous, pipelined logic using 2 CPLDs (Complex Programmable Logic Devices) and 2 SRAMs (Static Random Access Memories).
Each of the 64 bins is 16-bits wide and can therefore record 64K occurrences of adjacent grey levels (e.g., 100 0101 and 100 0100 are sorted into bin 33). If, during the "Read-Increment-Write" iterations, any of the bins becomes full (reaches a count of 0xFFFF), that bin will NOT overflow (wrap to 0x0000) with any additional pixels, because the bin pegs at 0xFFFF. This is essential for the image processing algorithms to get a good approximation of the grey level content and distribution in the image.
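The checkerboard sampling, 6-bit binning, and bin pegging can be sketched in software as follows. This is only an illustrative model of the CPLD/SRAM pipeline: the alternation between the two serial streams is approximated here by a scan/pixel parity test, and the function name is an assumption.

```python
# Sketch of the 64-bin histogram with checkerboard sampling and pegging.
# The 0xFFFF peg and the ignored LSB follow the text; the Python structure
# is illustrative of the hardware read-increment-write pipeline.

PEG = 0xFFFF

def build_histogram(scans):
    """scans: list of vertical scans, each a list of 7-bit pixel values.
    Samples 50% of the pixels in a checkerboard pattern."""
    bins = [0] * 64
    for s, scan in enumerate(scans):
        for p, pixel in enumerate(scan):
            if (s + p) % 2 == 0:            # checkerboard: every other pixel
                b = (pixel & 0x7F) >> 1     # upper 6 bits of the 7-bit value
                if bins[b] < PEG:           # peg at 0xFFFF; never wrap to 0
                    bins[b] += 1
    return bins
```

Note that adjacent grey levels (differing only in the ignored LSB) land in the same bin, as the text describes.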
Transpose and Block Input Buffer: The Input Buffer 3-1 is used to store the contents of the scaled, normalized image so that it can be transposed, grouped into 8x8 pixel blocks and sent to the compression hardware. Not only is it necessary to store the entire image for transposition purposes, but it is essential, to achieve predictable compression results, to perform "two-pass compression" on a fixed input image. The Transpose and Block function (see 3-11) is used to generate the 20-bit address needed to access the 1 Meg x 8 Input Buffer. It is this function that first stores the image data into the Input Buffer 3-1 as it is delivered from the N/S, and then removes the data in transposed, 8x8 blocks during each of the two compression passes.
To understand how this is done, it is important to know how the 1 Meg x 8 Input Buffer 3-1 is spatially divided. This arrangement is shown in Figs. 5, 6. Since the Input Buffer is byte-wide, each byte will hold one 7-bit pixel (MSB is 0). The 1 Meg x 8 buffer is first divided into eight, horizontal, 128K x 8 Channel Buffers representing the 8 channels of image data from the N/S (e.g., see Figs. 5, 6A). The 3 MSBs of the 20 address lines access one of the 8 channels.
Each 128K x 8 Channel Buffer is divided into an
186 x 11 array (Fig. 6B) of 64-pixel blocks. The reason for this arrangement will be explained later. Eleven of the 20 address bits are used to select the block within the
Channel Buffer. Finally, each block is divided into an 8x8 array of pixels with the 6 LSBs of the 20 address bits used to select the pixel within a block (Fig. 6C).
The Channel Buffers are arranged in an array of blocks composed of 11 horizontal rows by 186 vertical columns. Because the maximum scaling resolution acceptable by the JPEG processing/compression hardware is 137.5 dpi (11/16ths scaling of 200 dpi captured image), the maximum number of pixels processed, per vertical scan in one channel, is 88. Since there are 8 rows of pixels per block, 11 blocks are needed to cover the vertical dimension (Fig. 6B). One vertical column of 11 blocks consumes 704 (11 x 64) memory locations. Since there are 131072 (128K) locations in each Channel Buffer, up to 186 (131072 / 704 = 186.18) columns of 11 blocks can fit inside the memory. At 137.5 dpi (11/16ths scaling), this translates into a horizontal measure of 10.8 inches ... more than enough to store the maximum check width of 9 inches. It is rare that some of the columns on the "far left" (see Fig. 5) of the Channel Buffer would ever be used, since the average check length would be between 6 and 7 inches. Likewise, if scale factors less than 137.5 dpi (11/16ths scaling) are used, the block rows near the top of the Channel Buffer will not be used. For example, at 100 dpi (8/16ths scaling), the top three rows of the Channel Buffer will not be used. To access the Input Buffer (3-1, Fig. 3), the hardware implements five counters listed below (and shown in Fig. 6D):
Channel Counter (3 bits) - Accesses the 8 horizontal Channel Buffers that comprise the Input Buffer
Horizontal Block Counter (8 bits) - Accesses the 186 columns of blocks in each Channel Buffer
Vertical Block Counter (4 bits) - Accesses the 11 rows of blocks in each Channel Buffer
Column Counter (3 bits) - Accesses the 8 columns of pixels in each 8x8 block
Row Counter (3 bits) - Accesses the 8 horizontal rows of pixels in each 8x8 block
Because the Vertical Block Counter and Horizontal
Block Counter are not fully utilized (11 of 16 and 186 of 256 respectively), their combined values are remapped, via a "PROM look up table", to form a Block pointer value consisting of 11 bits. In total, therefore, 20 bits are needed to access the entire Input Buffer 3-1 (Figs. 3, 6D).
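The 20-bit address composition just described (3 channel bits, an 11-bit block pointer, 6 pixel bits) can be illustrated as follows. The function names are assumptions, and the dense-packing formula in `block_pointer` merely stands in for the PROM look-up table, whose actual contents are not specified.

```python
# Illustrative composition of the 20-bit Input Buffer address:
# 3 MSBs select the channel, 11 bits select one of up to 2046 blocks
# (186 columns x 11 rows), and 6 LSBs select the pixel within an 8x8 block.

BLOCKS_PER_COLUMN = 11      # vertical blocks per channel at 11/16ths scaling
COLUMNS_PER_CHANNEL = 186   # 131072 // (11 * 64) = 186 columns fit

def block_pointer(vertical_block, horizontal_block):
    """Stand-in for the PROM look-up: packs the partially used 4-bit and
    8-bit block counters into a dense 11-bit block index (0..2045)."""
    return horizontal_block * BLOCKS_PER_COLUMN + vertical_block

def input_buffer_address(channel, vertical_block, horizontal_block, row, col):
    """Build the full 20-bit address: 3 + 11 + 6 bits."""
    pixel = (row << 3) | col                                  # 6 LSBs
    block = block_pointer(vertical_block, horizontal_block)   # 11 bits
    return (channel << 17) | (block << 6) | pixel
```

Note that 186 x 11 = 2046 blocks fit within the 11-bit pointer (2048 values), which is why the remapping through the look-up table is needed at all.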
By controlling the five counters described above (Fig. 6D) and changing their operating parameters (when they are incremented, when they are reset or preset and whether they count up or down), the hardware is able to store a vertically scanned image in the Input Buffer 3-1 and then unload, in 8x8 blocks, starting from either of the four corners of the image.
The following describes detailed control of the five counters during image capture and image compression.
Counter control during image capture:
During image capture, the N/S hardware delivers vertical scans of pixel data in two serial pixel streams as shown in Fig. 2 (and see further details in Figs. 1'-5', described below).
To store ONE vertical scan, the JPEG processing/compression hardware must store these pixel pairs at two separate locations in its 1 Meg x 8 Input Buffer (Fig. 5) simultaneously. For this reason, the least significant bit of the Channel Counter is not used during image capture because pixels are received in pairs (from channel 0/1, 2/3, 4/5 and 6/7 respectively) and must be stored in two separate channels of the Input Buffer simultaneously. As valid pixel pairs are delivered from the N/S hardware, the Row Counter (Fig. 6D) is incremented. When the Row Counter rolls over (every 8 pixels), the Vertical Block Counter is incremented. When the Vertical Block Counter reaches the scale factor (e.g., when the Vertical Block Counter reaches 11 for 11/16ths scaling), the Vertical Block Counter is reset and the Channel Counter is incremented by 2. When the Channel Counter rolls over from 6 to 0, the Column Counter is incremented. In this fashion one vertical scan is stored in the Input Buffer (3-1).
Every 8 scans, the Column Counter will roll over and the Horizontal Block Counter is incremented. Once the image is completely input, the Horizontal Block Counter will indicate the width of the scaled image in units of 8 pixel blocks.
Counter Control During Compression:
To understand the control of the counters during compression, use Fig. 5 for reference. In this figure, an image has been stored in the Input Buffer during image capture. The valid portion of the image lies between Horizontal Block (HBLK) #0 and HBLK #120 (as determined by the width count) and between channel 5, Vertical Block (VBLK) #1 and channel 0, VBLK #9. In this example, compression will begin at top left corner of the image and will proceed left to right, top to bottom. Compression direction can begin in any of the 4 corners of the captured image and counter control will be slightly different for each compression orientation.
To start, the Channel Counter is preset to 5, the VBLK counter is preset to 1, the HBLK counter is preset to 120, the Column Counter is preset to 7 and the Row Counter is preset to 7 (upper left-hand corner of image). For JPEG compression, pixels must be raster scanned one pixel at a time, one block (64 pixels in 8X8 array) at a time. If the image is being compressed left to right, top to bottom (as it is in this example), all 64 pixels within the block must be read in the same direction before another block is read. Therefore, to read one 8X8 block, the Column Counter is continuously decremented (from 7 to 0) and every time the Column Counter rolls over (7 to 0) the Row Counter is decremented. Once the entire block has been read (64 pixels), the HBLK counter is decremented (moving left to right) and 64 pixels are read from the next block. After all 120 blocks are read in a row (HBLK counter reaches 0), the VBLK counter is decremented (moving top to bottom), and the HBLK counter is preset to the image width (left side or 120 in this example). Once the VBLK, HBLK, Column and Row counters reach 0 (bottom right corner of a channel), the Channel Counter is decremented, the VBLK counter is preset with one less than the scale factor (top of the channel or 10 in the case of 11/16ths scaling), the HBLK counter is preset to the width (left or 120 in this example) and the Column and Row counters are preset to 7 (top right corner of a block). Counter control proceeds until the pixel at the bottom right corner of the image is read (in this example CH = 0, VBLK = 9, HBLK = 0, Column = 0, Row = 0); then compression terminates.
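The block read-out order in the example above can be modeled with nested loops rather than hardware counters. This sketch is illustrative only: it yields block coordinates (not individual pixels), and the default parameters mirror the example (top at channel 5 / VBLK 1, bottom at channel 0 / VBLK 9, HBLK preset to 120, 11/16ths scaling).

```python
# Illustrative model of the compression read-out order described above
# (left-to-right, top-to-bottom), one (channel, VBLK, HBLK) tuple per block.

def compression_block_order(top_ch=5, top_vblk=1, bottom_ch=0, bottom_vblk=9,
                            hblk_preset=120, scale=11):
    """Yield every 8x8 block of the valid image area in compression order."""
    ch, vblk = top_ch, top_vblk
    while True:
        for hblk in range(hblk_preset, -1, -1):  # HBLK 120 down to 0
            yield ch, vblk, hblk
        if (ch, vblk) == (bottom_ch, bottom_vblk):
            return                               # bottom row of image read
        if vblk == 0:                            # bottom of this channel
            ch -= 1                              # Channel Counter decremented
            vblk = scale - 1                     # VBLK preset to channel top
        else:
            vblk -= 1                            # VBLK decremented (downward)
```

The example image spans 2 block rows in channel 5, all 11 rows in channels 4 through 1, and 2 rows in channel 0, i.e. 48 block rows of 121 blocks each.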
Top/Bottom Detection (e.g., see 3-3, Figs. 3,7): The top and bottom borders of an image are detected as the image is being delivered from the N/S hardware and stored in the Input Buffer 3-1.
The borders could have been found without any custom hardware by having the DSP scan the contents of the Input Buffer after the image has been loaded. This, however, would greatly increase the time required for the JPEG processing/compression path to process the image (e.g., not only would it take relatively long for the DSP to sample the contents of the Input Buffer, but this sampling could not be done till AFTER the image was stored in the Input Buffer) . By finding the borders with custom hardware WHILE the image is being input, the borders are available to the DSP as soon as the image has been input— thus saving time.
The top and bottom borders detected by the hardware are relative to the scan direction of data received from the N/S hardware. In other words, scans are assumed to be "pixel-wide" vertical strips spanning all 8 channels starting at the bottom block of channel 0 and proceeding to the top block of channel 7. The bottom border of the image is the lowest occurrence of image data relative to the scan direction. This may actually be the top of the physical image, depending on which camera is capturing the image (front or back) and how the check was loaded in the track (e.g. upside down?)
The resolution of the border detection circuitry is in units of blocks, not pixels. Therefore, borders are reported according to the channel and block row within the channel. A typical example of bottom and top borders detected by the hardware is BOTTOM = channel 0, row 6; TOP = channel 6, row 3. Borders are determined on a "scan-by-scan" basis. Initially, prior to receiving the first vertical scan of an image, the top border is set at the extreme BOTTOM (CH 0, block 0) and the bottom is set at the extreme top (CH 7, block 10). As the first scan is delivered from the N/S, the JPEG compression path hardware compares each pixel to a given "border" threshold value. Pixels "whiter" than the threshold are considered part of the check image, while pixels "darker" than the threshold are considered track background. Every time 5 consecutive image pixels are seen, the hardware declares a "transition". The first transition seen in a scan is considered the "bottom" of that scan while the last transition in a scan is considered to be the "top". After each scan, the top and bottom borders of the latest scan are sent to a Top and Bottom Port. Only if the latest scan's top border is "higher" than the value presently existing in the Top port will the Top port be updated to the new value. Likewise, only if the latest scan's bottom border is "lower" than the value presently existing in the Bottom port will the Bottom port be updated. The net result is that, after the entire image has been input, the Top port will contain the channel and block number of the "highest" point of the image and the Bottom port will contain the channel and block number of the "lowest" point of the image. The reason the hardware requires five (5) CONSECUTIVE image pixels before declaring a transition is to prevent the top/bottom circuitry from erroneously declaring a border based on dust or scrap (noise) in the track that might exceed the value of the selected threshold for a few consecutive pixels.
Selection of five has been determined experimentally.
Although this concept of border detection may seem straightforward, it is complicated by the fact that the vertical scans are not delivered to the H/C by a single pixel stream sequentially from the bottom to the top of the scan. Rather, data is delivered simultaneously from two pixel streams representing data from odd/even channels (0, 2, 4 and 6 and channels 1, 3, 5 and 7), respectively. To find a border of a single scan based on two interleaved streams of pixels, the hardware implements two independent transition detectors TD-1, TD-2 (one for each N/S pixel stream; see Fig. 7; Fig. 7 illustrates top/bottom detection from two serial pixel streams). When transitions are detected, the hardware considers which of the two detectors found the transition, the value of the Channel Counter and the value of the Vertical Block Counter (see the Transpose and Block description) to see where, in the scan, the transition took place. Based on this information, the appropriate Top or Bottom port can be updated.
Width Detection:
The top and bottom borders are detected by the JPEG processing/compression path hardware, but the right (leading) edge and left (trailing) edge are detected upstream by the N/S hardware. The N/S hardware, subsequently, only "qualifies" the data between the lead and trail edges when it delivers pixel data to the JPEG processing/compression (P/C) paths. The N/S hardware will only qualify data in groups of 8 vertical scans. In other words, the N/S hardware frames the image in the horizontal dimension on block boundaries. Therefore, the only action the JPEG processing/compression hardware need perform to detect the width of the image (in block units) is to count every eighth qualified scan from the N/S unit. By doing this, the JPEG processing/compression hardware is able to furnish the horizontal dimension of the image to the DSP as soon as the image is input from the N/S.
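Because the N/S hardware frames the image on 8-scan block boundaries, the width computation reduces to a trivial division, sketched below (the function name is illustrative).

```python
# Since the N/S only qualifies scans in groups of 8, the image width in
# 8-pixel block units is just the qualified scan count divided by 8.

def width_in_blocks(qualified_scan_count):
    """Width of the image in block units, from the qualified scan count."""
    assert qualified_scan_count % 8 == 0, "N/S frames images on block boundaries"
    return qualified_scan_count // 8
```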
Status Capture:
When the N/S hardware sends image data to the JPEG P/C path, it appends a burst of image-related "status" information on the "back porch" of the image transfer. The JPEG P/C path must take this information, supplement it with "status data" pertinent to the P/C hardware and attach it to the now Compressed Image Packet that is to be sent to the rest of the check imaging system via the JCDB (output of Fig. 3).
The JPEG P/C hardware accomplishes this function by buffering the "status data" as it is received from the
N/S (buffered at 3-9 Fig. 3), and then making this data available to the DSP for inclusion in the final Compressed Image Packet.
Remap (See 3-13, Fig. 3):
The normalized and scaled image received from the N/S unit, and stored in Input Buffer 3-15 (Fig. 3), is processed prior to compression. As stated earlier, the image processing algorithms are executed by the DSP, based on the image dimensions and histogram generated by the hardware as the image was being input from the N/S. Once the algorithms are executed, the image could be updated by having the DSP update every pixel in the Input Buffer to its new image processed value. Such a method would take a relatively long time, since the DSP would have to fetch, update and write every pixel in the Input Buffer. A much faster method is preferred: i.e., to remap pixels "on the fly" as they are sequentially sent to the compression hardware. To implement this method, the DSP generates a 128-entry "remap table" (one for every pixel grey level). During compression, every pixel (7 bits) pulled from the Input Buffer addresses the 128-entry table (implemented in high speed RAM at 3-13) and the output from the table is the remapped, image processed value (8 bits) for that pixel. This "remapped pixel" (value) is then sent to the compression hardware (i.e., to 3-17, and beyond).
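The remap-on-the-fly scheme can be illustrated as follows. This sketch assumes a simple linear contrast stretch as the image processing algorithm (purely illustrative; the actual algorithms are chosen by the DSP from the histogram), and the function names are invented:

```python
def build_remap_table(transform):
    """Build the 128-entry remap table (one entry per 7-bit grey level).
    The DSP would do this once per image; each table entry is the 8-bit,
    image-processed value for that input grey level."""
    return [min(255, max(0, int(transform(level)))) for level in range(128)]


def remap_stream(pixels, table):
    """Remap pixels 'on the fly': each 7-bit pixel simply addresses the
    table, avoiding a read-modify-write of the whole Input Buffer."""
    for p in pixels:
        yield table[p]


# Illustrative algorithm: stretch grey levels 8..120 to the full 8-bit range.
stretch = build_remap_table(lambda v: (v - 8) * 255 / (120 - 8))
```

The point of the table is that the per-pixel cost at compression time is a single lookup, no matter how expensive the algorithm that produced the table was.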
LSI JPEG Compression (see 3-17, Figure 3): The JPEG-defined Discrete Cosine Transform (DCT), quantization and Huffman coding are implemented in a two-chip set from LSI LOGIC CORPORATION. This chip set requires that image pixels be delivered in 8x8 blocks. The retrieval of image data from Input Buffer 3-1 in the proper (8x8) sequence is effected by the Transpose and Block logic 3-11. The pixels are remapped (image processed) "on the fly" (at unit 3-13) before being sent to the LSI chip set (i.e., to block 3-17). Because of the performance requirements of the high speed check imaging system, it is important to be able to compress the video image in Input Buffer 3-1 to a known packet size in a determinate amount of time. The image will be compressed once with the LSI chip set, using a standard QM and Huffman table provided by the DSP. Then, the result of this first compression, along with the image dimensions, will be used by the DSP to select a second QM and Huffman table, which is then sent to the LSI chip set by the DSP. With this second QM and Huffman table, this image (bit set) is then compressed a second time, then passed through Post Process unit 3-19 and stored in Output Buffer 3-25. This "dual compression" technique is a feature hereof, and will be understood to yield the desired compression packet size.
The design of the chip set by LSI LOGIC Corp. is such that the time required to compress an image can be determined by the number of pixels to be compressed, as well as by the speed at which compression is executed. Even in the (statistically rare) case where JPEG compression results in EXPANSION, the LSI JPEG chip set will take no longer to execute. This feature is critical to performance in a high speed check imaging system like the indicated DP1800. To further increase compression speed and reduce the time a JPEG compression path requires to process an image, the operating clock frequency of the JPEG compression path is preferably increased from 20 MHz to 25 MHz during compression.
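The dual-compression control flow can be sketched as below. The table-selection rule shown (scaling the first-pass size estimate by a per-table size ratio) is purely illustrative; the actual DSP selection criteria are not detailed here, and `compress`, `tables`, and the "relative_size" field are invented for the sketch:

```python
def dual_pass_compress(image, compress, tables, target_size):
    """Compress once with the standard (first) table to estimate the
    image's compressibility, then pick a second table predicted to meet
    the target packet size and compress again with that table."""
    estimate = len(compress(image, tables[0]))    # first pass: estimate only
    for table in tables:                          # finest table first
        if estimate * table["relative_size"] <= target_size:
            return compress(image, table)         # second, definitive pass
    return compress(image, tables[-1])            # fall back to coarsest
```

The first-pass output itself is discarded; only its size matters, which is why (as described below) the Post Processor functions are skipped during the first pass.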
Packet Count (see 3-21, Figure 3): The JPEG P/C hardware need not save the results of "first pass compression". Therefore, many of the functions executed by Post Processor Unit 3-19 (e.g., byte stuffing, restart code insertion, writing to Output Buffer) are not done during "first pass compression". What is needed from first pass compression is merely an estimate of the result it yields. To provide this, the (32-bit) words output from the LSI LOGIC chip set 3-17 during compression are counted by the Packet Counter 3-21. The only inaccuracy in this count is that it does not include any "extra data" like stuffed bytes, restart codes or padded bytes that would be added to the compressed image by the Post Processor 3-19 (see Fig. 3). This extra data, however, is relatively insignificant, even for the most compressible images. After "first pass compression", the DSP reads the Packet Counter and uses this "word-count" information to pick a Huffman Table and a QM for the "second compression."
JPEG Post Processor (See 3-19 and 3-17A, Fig. 3): The LSI JPEG chip set outputs 32-bit words of "entropy coded data" during compression. This data does not include the JPEG-required "stuffed bytes" (a byte of 0x00 must follow every byte of 0xFF entropy coded data), or the JPEG-required "restart codes" (two bytes of 0xFF and 0xDy after each horizontal row of blocks has been compressed), or the "padding bytes" (0xFF) required by the application to align restart codes on 32-bit boundaries.
The JPEG compression hardware must take the 32-bit output from the LSI chip set, insert the necessary data described above and funnel this information down into the 16-bit-wide Output Buffer 3-25. To accomplish this, the JPEG Post Processor 3-19 must take one byte at a time from the 32-bit LSI chip set output, check to see if the byte equals 0xFF, send this data to the Output Buffer, and also send another byte of 0x00 if the previous byte was 0xFF. After compressing each horizontal row of blocks, the Post Processor will insert the two-byte restart marker and the number of padding bytes required to align restart markers on 32-bit boundaries. JPEG standards require the restart marker to have a MOD 8 count component; and this is provided by the Post Processor hardware.
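The byte-level work of the Post Processor (stuffing, restart marker with MOD 8 count, and 0xFF padding) can be modeled as follows. The exact padding convention is an assumption in this sketch (pad so the two-byte marker ends on a 32-bit boundary), and the function name is invented:

```python
def post_process_row(words, restart_index):
    """Post-process one horizontal row of entropy-coded 32-bit words.

    - a stuffed 0x00 follows every 0xFF data byte (JPEG requirement)
    - 0xFF padding bytes align the restart marker on a 32-bit boundary
      (assumed convention: the marker's last byte ends a 32-bit word)
    - the row ends with the restart marker 0xFF, 0xD0 + (n mod 8)
    """
    out = bytearray()
    for word in words:
        for byte in word.to_bytes(4, "big"):   # one byte at a time
            out.append(byte)
            if byte == 0xFF:
                out.append(0x00)               # JPEG stuffed byte
    while (len(out) + 2) % 4 != 0:             # pad before the 2-byte marker
        out.append(0xFF)
    out += bytes([0xFF, 0xD0 + (restart_index % 8)])
    return bytes(out)
```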
On average, it takes one clock cycle for the Post Processor to process each byte of data from the 32-bit (4-byte) LSI chip set ... two clock cycles if a 0x00 byte needs to be stuffed. While compressing "busy" portions of the image, it is possible (worst case) for the LSI chip set to output a 32-bit word 3 times for every four clocks. The Post Processor cannot keep pace with the LSI chip set during these "busy" portions of the image. To mitigate such "special circumstances", the JPEG compression hardware preferably also provides a 1K x 32 FIFO buffer 3-17A between the LSI chip set output and the Post Processor Logic (i.e., between 3-17 and 3-19). This buffer 3-17A allows the LSI output data to be buffered, and not lost, while the Post Processor catches up during "un-busy" portions of the image. Since the Post Processor only operates during "second pass compression" (when the DSP has already selected a target QM, and when compression is more predictable), the probability of "busy" portions of image occurring is greatly reduced. Therefore, any FIFO buffer, let alone a 1K FIFO, would rarely be used. In the statistically-rare case where the FIFO (3-17A) is being heavily used, the input to the LSI chip set will be throttled by the JPEG compression hardware when the FIFO (3-17A) reaches "half-full" (512 words waiting for Post processing). Although this feature will rarely be used, it prevents data from being lost, even in the most extreme case. In fact, images would have to EXPAND by a factor of 4 before this feature would ever kick in. The only side effect of executing this feature is a slight increase in compression time due to the occasional "throttling" of the input to the LSI chip set (in 3-17).
Restart Marker FIFO (3-23, Figure 3): During "second pass compression", the JPEG P/C hardware generates and loads the compressed (entropy-coded) data into Output Buffer 3-25. After second pass compression is complete, the DSP builds the required header information around the entropy coded data packet. One of the required elements of the header is a table of offsets that will point to the END of each entropy coded segment in the Output Buffer. An entropy coded segment from the JPEG P/C hardware is comprised of the compressed data from one horizontal row of 8x8 blocks, ending with a restart marker. Providing this table allows the compressed image to be "gracefully" truncated by software on entropy coded segment boundaries. The DSP gets this "offset" information from the Restart Marker FIFO 3-23. During second pass compression, the Post Processor 3-19 must insert restart markers into the Output Buffer at the end of each entropy coded segment. When the Post Processor loads a restart marker into the Output Buffer, the address into the Output Buffer at that particular instant is loaded into the Restart Marker FIFO.
Output Buffer (3-25, Figure 3):
The 64K x 16 Output Buffer 3-25 is the physical location where the Compressed Image Packet is built by the JPEG compression hardware (entropy coded data) and by the DSP (header information). During second pass compression the entropy coded data is loaded into the Output Buffer. After second pass compression is complete, the DSP builds the remaining header information around the entropy coded data. Once this is complete, the Image Packet is ready for transfer to the next stage (see JCDB, or "JPEG Compressed Data Buffer", Fig. 3) of the high speed check imaging system; and then the JPEG compression path is free to accept another image into its Input Buffer 3-1.
This "dual buffering" (Input Buffer and Output Buffer) so implemented in my JPEG processing/compression path according to this feature, enhances performance by allowing a second image to be received by a JPEG compression path before the first Compressed Image Packet is sent to the JCDB.
Scaler Invention (Figs. 1'-5'): Unlike prior arrangements for scaling image data, the subject "Mapping Scaler" is not microprocessor-based and performs a "mapping" (rather than simply implementing a scaling algorithm, as does, e.g., the ASIC of US 5,305,398); it can therefore do "asymmetric scaling" and can store multiple algorithms while selecting one as needed (whereas, by contrast, US 5,305,398 can run only one set scaling algorithm).
As a result, certain advantages accrue: e.g., this "Mapping Scaler" provides more flexibility in storing, selecting, and running new and different scaling algorithms; new scaling algorithms, not currently stored, can be easily loaded by simply changing the circuit ROMs, with no hardware changes required; this scaler can implement asymmetric scaling; and it is not microprocessor-based.
With this mapping function, any scaling algorithm that can be defined as a two-dimensional adjacent pixel/adjacent scanline mapping can be run; also, asymmetric scaling can be implemented. There is no reason why the scaling algorithm (or scaling scale) here needs to be identical for adjacent pixel (vs. adjacent scanline) scaling.
Multiple algorithms can be stored and selected for use on an as-needed basis by a single instruction to the NS command register.
Scaler Theory:
Following is the preferred theory for operating scaling circuits on our "Mapping Normalizer/Scaler" (NS) boards (e.g., see board in Fig. 5'); such are preferably used in the JPEG imaging module of a document imager/processor (e.g., of the type contemplated in US 5,305,398). Such a "Mapping Scaler" is shown in the block diagram of Fig. 1'.
General Discussion:
Here, one may assume that there are eight channels of image data that need scaling for each document side (front, rear). We contemplate two NS (Normalizer/Scaler) boards to scale the eight channels; thus, each NS board will scale 4 channels of image data. Each channel of data is scaled independently by one of four image scaling circuits called "Scalers". (Fig. 5' depicts such an NS board in block diagram form; one such "Scaler" is depicted in Fig. 1'.)
Image channel data will be understood as a collection of scan lines. Each scan line (e.g., see Fig. 4') makes up 1/8 of the total track image (top to bottom). If you place scanlines next to one another, you will get a strip of image that is 1/8 of a track high and that extends lengthwise (e.g., to infinity).
In Fig. 4', note where the full image is preferably divided into eight channels, with the scan-lines in each channel preferably comprised of 128 pixels, and each pixel represented by a 7-bit "word".
Each scan line is 128 pixels tall. Image data is fed into the scalers sequentially, starting with pixel 0 of the first scan line and ending with pixel 127. After pixel 127, the sequence begins again with pixel 0 of the next scan line, and so on.
The input data to the scalers therefore has a pattern like the following:

|------------------ Scan line 1 -----------------|--- Scan line 2 ---|
p0, p1, p2, p3, ... p123, p124, p125, p126, p127, p0, p1, p2, p3, ... p127
Resolution for such input data may be assumed as 200 pixels/inch of document. As it turns out, this resolution is too high, so one needs a means of reducing resolution without degrading image quality.
Reducing pixel resolution is the job of the subject "Mapping Scaler" (Fig. 1') on the NS board.
Scaler Operation:
The subject mapping scaler circuit (Fig. 1') performs a two-dimensional mapping of adjacent pixel values (document height) and adjacent scanline values (document length) to "new values" and adds valid/invalid markers. This "mapping" can reduce the image resolution from 200 dpi (dots per inch) to some lesser value. The scaler circuits preferably accomplish the mapping using "ROM Look-Up tables" (Scaler ROMs so coupled), that are arranged preferably in a pipeline architecture.
The ROM-based design means that all scaling calculations are performed ahead of time by the algorithm designer. Then the algorithm results are programmed into the scaler circuit ROMs (see R-1, R-2, R-3 in Fig. 2'; R-4, -4', -5 in Fig. 3'). The scaler circuits do not execute algorithmic calculations. The "new values" and valid/invalid markers come from the scaling algorithm results that are stored in the scaler ROMs. The scaler circuits "map" input values into the output algorithm results.
Since only the results of the scaling algorithm are needed, any adjacent pixel/adjacent scanline scaling algorithm that can be designed can be implemented in this design.
The scaling design according to this implemented feature can store eight (8) different scaling algorithms. Each algorithm is easily selected from the NS board's Command Register CR via a "scale factor" input (e.g., see Control Register CR in Fig. 5'). Of course, using larger ROMs would allow for more stored algorithms.
Here, pipelined architecture means that, once the pipeline is full, the entire two-dimensional scaling function occurs in one clock cycle (tic). Therefore, scaling can occur in "real time" (at a desired 8 MHz rate).
Actual Scaling (Mapping):
The scaler circuits implement the scaling mapping in a pipeline fashion. The first Scaler stage S-I (Figs. 1', 2') implements "adjacent pixel mapping" (document height).
As one can see from Fig. 2', a "current" (7-bit) pixel value goes (from INPUT to R-1) to the pixel ROM address. The "previous" pixel value goes to the previous pixel ROM address R-2. In addition, each ROM address gets a pixel count number p-c and associated scale factor. The pixel count number p-c keeps track of which pixel one is working on. Pixel count is reset at the end of every scan by an "end of scan" (e-os) signal. A "scale factor" input selects which of the eight (8) possible scaling algorithm results is active.
Therefore, the ROM address contains the pixel value, pixel number, and scaling algorithm index. These three input values point (map) to a unique output data value and a valid "marker", depending upon the scaling algorithm (used to separate the result file). The data output from each ROM is then fed to the address of an Adder ROM (R-3) that performs the addition-mapping. (Or whatever other mapping may be programmed into it.)
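The "all calculations ahead of time" idea can be illustrated in software. Here a hypothetical 2-pixel averaging algorithm is precomputed into dictionary "ROMs" addressed by (pixel value, pixel count); the keep-mask, names, and data structures are illustrative only, not the actual ROM contents:

```python
def build_stage_one_roms(keep_mask, bits=7):
    """Precompute stage-one 'ROM' contents for a 2-pixel average.
    Address = (7-bit pixel value, 4-bit pixel count); contents = half
    the pixel value, plus a valid marker from the 16-position keep mask
    (the mask plays the role of the selected scaling algorithm)."""
    rom = {}
    for value in range(1 << bits):
        for count in range(16):            # 4-bit pixel counter window
            rom[(value, count)] = (value // 2, keep_mask[count])
    return rom


def stage_one_map(rom_cur, rom_prev, current, previous, count):
    """The Adder ROM combines the two halves: (current + previous) / 2,
    carrying along the valid/invalid marker for this pixel position."""
    cur_half, valid = rom_cur[(current, count)]
    prev_half, _ = rom_prev[(previous, count)]
    return cur_half + prev_half, valid
```

At run time there is no arithmetic to perform at all, only lookups, which is what lets the hardware complete the mapping in one clock per pixel.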
In general, "stage one" (S-I, Figs. 1', 2') of this scaler does the following. Say we have an input data stream as follows:
|------------------ Scan line 1 -------------------|--- Scan line 2 ---|
p127, p0, p1, p2, p3, ... p124, p125, p126, p127, p0, p1, p2, p3, ... p127

p127 and p0 are mapped to p0' and declared "valid" or "invalid";
p0 and p1 are mapped to p1' and declared "valid" or "invalid";
...
p126 and p127 are mapped to p127' and declared "valid" or "invalid".
Then the entire sequence begins again.
The output data from stage one may now look like the following (possible 100 dpi scaling; notice the new values and valid/invalid markers "v,i"):
|----------------- Scan line 1 ----------------|--- Scan line 2 ---|
p127', p0', p1', p2', p3', ... p126', p127', p0', p1', p2', p3', ... p127'
  v     i    v    i    v   ...    i      v     i    v    i    v   ...   v
Where p' is the new mapped pixel value and "i"/"v" denotes an invalid or valid marker. The next stage S-II (Stage Two, Figs. 1', 3') of the scaler circuit maps pixel values from adjacent scan lines (document length; note: Fig. 3' shows "dual port memory" and "adjacent scanline scaler").
To get at adjacent scan lines we need a place to store one complete scan line (so the previous scan line can be recovered on a pixel-by-pixel basis and mapped with pixels from the "current" scan line). For this storage function a dual-port RAM is preferred (e.g., see R-dp). For the dual-port RAM address, we use the pixel count. Now we can store the "current" scan line pixel, while recalling the previous scan line pixel in a single operation.
As one can see from Fig. 3', the "current" scan line pixel value is presented to a "Scan Line ROM" R-4, along with scale factor and scan line count. The previous scan line pixel value is presented to a "Previous Scan Line ROM" R-4', along with scale factor and scan line count.
Each ROM now has enough information (in the address) to "map" to a "new value" and a valid/invalid marker. The two "new values" are then fed to an Adder ROM R-5 for the addition function. Therefore, if the output of the first stage (S-I) has the data pattern:
|----------------- Scan line 1 ----------------|--- Scan line 2 ---|
p127', p0', p1', p2', p3', ... p126', p127', p0', p1', p2', p3', ... p127'
  v     i    v    i    v   ...    i      v     i    v    i    v   ...   v
Then:
p0' of Scan 1 and p0' of Scan 2 are mapped to p0'' and marked valid or not.
p1' of Scan 1 and p1' of Scan 2 are mapped to p1'' and marked valid or not.
...
p127' of Scan 1 and p127' of Scan 2 are mapped to p127'' and marked valid or not.
The data now looks like this at the R-5 output (possible 100 dpi scaling):

|-------- Scan lines 1&2 --------|--- Scan lines 2&3 ---|
p0'', p1'', p2'', p3'', ... p126'', p127'', p0'', p1'', p2'', p3'', ... p127''
  i     i     i     i   ...    i       v      i     v     i     v   ...
The job is now complete. The output of the scaling circuits (S-OUT, Fig. 1') is a two-dimensional, scaled-down version of the input (S-IN). The scaled values and "valid" labels are dependent upon the scaling algorithm results stored in the scaler ROMs. Pixels marked with "i" are invalid and are filtered out of the data stream.
Scaler Features:
Multiple-scaling selections: The described Scaler circuits preferably use a 4-bit pixel and 4-bit scanline counter (e.g., see counter PC in Figs. 1', 2'). Counter PC keeps track of pixel and scanline location within a 16-position window. Therefore, the scalers can implement any scaling algorithm that scales down the image by a multiple of 1/16. Possible scaling selections are 1/16, 2/16, 3/16, 4/16, 5/16, 6/16, 7/16, 8/16, 9/16, 10/16, 11/16, 12/16, 13/16, 14/16, 15/16, 16/16.
Of course, making the pixel and scanline counters larger would give more selections: 5 bits would give multiples of 32, 6 bits would give multiples of 64, etc.
Multiple-scaling algorithms:
The Scaler circuits preferably use a 3-bit "scale factor selection" code. This code allows 8 different scaling algorithms to be stored in the scaler ROMs simultaneously. Which scaling algorithm is run is determined by the 3-bit scale factor selection code. This code is accessed via the command register of the NS board. This selection code can activate different scaling factors, such as 4/16 or 8/16; or it can activate different scaling algorithms, such as 8/16 2x2 averaging or 8/16 bilinear interpolation, or whatever algorithm the designer wants to run; it is up to the algorithm designer and what he has programmed into the "Scaler ROMs" (e.g., see R-1, -2, -3, -4, -4', -5 in Figs. 1', 2').
Asymmetric scaling:
Because adjacent pixel scaling (height) is handled independently of adjacent scanline scaling (length), it is possible to implement "asymmetric scaling". There is no reason why adjacent pixel scaling has to use the same scaling factor or scaling algorithm as adjacent scanline scaling. Example: adjacent pixel scaling (document height) might be 7/16 and adjacent scanline scaling (document length) might be 9/16. This particular example would have the visual effect of stretching the document length-wise. One can see how this option gives the algorithm designer extra flexibility in deciding how to scale documents.
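The visual effect of asymmetric scaling can be illustrated with two independent keep-masks, one for the pixel (height) dimension and one for the scanline (length) dimension. This is a software sketch only; the real scalers achieve the same effect through their independent stage-one and stage-two ROM mappings:

```python
def asymmetric_scale(image, height_keep, length_keep):
    """Scale rows (document height) and columns (document length) by
    independent 16-position keep masks, e.g. 7/16 height and 9/16 length."""
    rows = [row for i, row in enumerate(image) if height_keep[i % 16]]
    return [[p for j, p in enumerate(row) if length_keep[j % 16]]
            for row in rows]
```

With a 7/16 height mask and a 9/16 length mask, a 16x16 input region becomes 7x9, i.e., proportionally wider than tall, which is the length-wise "stretching" effect described above.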
Variations:
Workers will see that certain features hereof may be modified. For instance, note that it is possible to replace the mentioned three "S-I ROMs" (see Fig. 2': "Current pixel ROM" R-1, "previous pixel" ROM R-2, and "Adder ROM" R-3) with one large ROM.
Also, the three "S-II ROMs" ("scanline" ROM R-4, "previous scanline" ROM R-4' and "adder" ROM R-5) can also be replaced with one large ROM.
Preferred implementation of the scalers allows for storage of eight separate scaling algorithm-mappings.
This of course is limited only by the size of ROM selected.
The design could use larger ROMs and so accommodate more scaling algorithm-mappings.
Results. Advantages:
Workers will appreciate that such a "mapping scaler" arrangement has several advantages (e.g., over the ASIC Scaler of US 5,305,398, with its microprocessor-based single fixed scaling algorithm): e.g., the "Mapping Scaler" can provide more flexibility in storing, selecting, and running new and different scaling algorithms; scaling algorithms, not currently stored, can be easily loaded by simply changing the circuit ROMs, with no hardware change required; and it can implement asymmetric scaling. Since these scaler circuits perform a mapping function rather than just implementing a scaling algorithm, any scaling algorithm that can be defined as a two-dimensional "adjacent pixel/adjacent scanline mapping" can be run; and asymmetric scaling can be implemented. [There is no reason why the scaling algorithm or scaling scale needs to be identical for adjacent pixel vs. adjacent scanline scaling.]
Also, multiple algorithms (the instant NS scalers store 8) are stored and can be selected for use on an as-needed basis by a single instruction to the NS command register (vs. US 5,305,398, which could run only one scaling algorithm).
And multiple scaling mappings can be stored in the mapping ROMs, with these mappings selected by changing the ROM addressing via the scale factor input. (This current NS implementation stores 8 different scaling mappings.) The number of mappings stored is, of course, dependent on the size of the ROMs chosen. Larger ROMs would give space for more mappings. Further, such scaler circuits can make use of dual port RAM for storage of a complete scanline for use in the adjacent scanline scaling. And since the scaler marks pixels as "valid" or "invalid", pixels marked invalid may be filtered out of the data stream.
Synchronizing Tags:
Conventional Solution ("Old Way") for Synchronization:
In a more conventional solution to this problem, all transactions between processing stations would be performed on a first in, first out (FIFO) basis, while assuming, for instance, that four different associated data queues (front image, rear image, collateral document data, and image "status") will remain in synchronization. The only validation of genuine synchronization would be via the physical dimensions of an image (e.g., as embedded in the image data, along with "image status"). However, when processing documents of a single uniform size, this validation is worth little.
New Solution (Preferred Embodiment): We prefer, according to a feature hereof, to use a "sync tag identifier" (i.e., identifying bits that help synchronize) which is assigned to a document and is used to track that document through the Imaging Module. This "Tag" is preferably used by each software and hardware entity that performs functions on a per-document basis. [No "synchronization" or other time-based check should be inferred.]
For instance, consider Document Processor DP schematically indicated in Fig. 1B and including an image interface unit (board) IIB and associated processing software DP-S, fed by an Imaging Module IM, including a pair of Front/Back image processing units A-1, A-5 to develop respective electronic, digital document image data, as aided by a Diagnostic/Transport Interface DTI. This electronic image data is passed to a Main Processor A-7, and may be stored in a Storage-Retrieval unit SRM, being linked to processor A-7 via a Point-to-Point Optical Link unit A-9, as known in the art.
In this system, it will be understood that an Electronic Camera and an Image Digitizer operate to process video scan lines, but that they perform no operations on a per-document basis, and therefore do not use "sync tags".
Here, the "sync tags" are preferably arranged to originate with the Document Processor Software (e.g., see Fig. 1B, element DP-S) as workers will appreciate.
Sync-Tags, In General:
Figure 2A illustrates a "flow" of "sync tag" information through the image processing electronics of a document processor (e.g., like that of Fig. 1B, the Imaging Module thereof) to a Storage and Retrieval Module, SRM. The "sync tags" preferably originate within the document processor software, and are returned to the document processor software, providing an end-to-end check of the integrity of image generation.
The "sync tag" for a document is preferably assigned by software executing in the document processor. That software produces information indicating the operations that the document processor/imaging module are to perform on the document as it travels to its assigned sort-pocket; this is the "Dispose Command". Part of this Dispose Command is the sync tag and image information. The Dispose Command is transferred by the Image Interface IIB (Fig. 1B) to the Imaging Module IM. Salient units of image processing electronics are indicated in Fig. 2A, including image digitizer ID, front CAR Port CF (accepts Courtesy-Amount-Reader data, as known in the art), with a Buffer JCDB (JPEG Data Buffer, also see Figs. 1C and 2) fed by a Histogram/Compressor Stage H/C that is, in turn, fed by a pair of Normalizer/Scalers (Master N/S, M-S and Slave N/S, S-S). Sync-tag data is fed to Scaler M-S (e.g., from Document Processor Software DP-S, Fig. 1B).
A preferred Normalizer/Scaler organization (re sync-tag) is indicated in Fig. 9 as a Sync-Tag FIFO register 5-1 coupled between an interface 5-5 to the H/C stage and an input (DTI) interface 5-3, with a "Last Sync-Tag Register" 5-7 in parallel therewith. Fault registers 5-8 and Status registers 5-9 are also so coupled.
A preferred organization of JPEG Compressed Data Buffer JCDB (e.g., see Figs. 1C, 2A, 3 and 9) is indicated in Fig. 10 as a pair of Primary and Redundant Memory buffers 6-3, 6-2 coupled between H/C interface 5-5 (see Fig. 9) and Interface 6-4 to the Main Processor, with a Cross-Compare Stage 6-8 in parallel to Interface 6-4. A Sync-Tag Queue Unit 6-6 and associated DT (Diagnostic Transport) Interface 6-7 (see Fig. 1B) are also fed by H/C Interface 5-5.
Diagnostic and Transport Interface Use of Sync Tag:
According to this embodiment, when a Diagnostic and Transport Interface (DTI) receives such "disposition information" (e.g., Fig. 1B, as above) from the document processor, it extracts the sync tag information and passes the sync tag value to Sync Tag FIFOs in the Normalizer/Scaler (N/S) units (e.g., see Fig. 9) for the front and back image processing electronics. Then, this disposition information is passed to the Main Processor (A-7, Fig. 1B).
When the DTI receives an "interrupt" from the Normalizer/Scaler units, it begins a timeout for the item to complete compression. The DTI then reads the "Last Sync Tag Register" from both Normalizer/Scalers (e.g., see Fig. 9), and verifies that the sync tag that was read matches the sync-tag in the "Dispose Command". If either sync tag is "incorrect" (i.e., does not "match"), then the DTI requests the Main Processor to "Stop Flow". When the DTI receives an interrupt from a JPEG Compressed Data Buffer (JCDB, Fig. 2A; described above), it reads the Sync Tag Queue for the interrupting JCDB. If the sync tag has the expected value, then the "timeout" for the item to complete compression is disabled. If the sync tag does not have the expected value, the DTI requests the Main Processor to "Stop Flow". If this is the second JCDB interrupt for this item (that is, if the interrupt from the JCDB from the other side for this item has been processed or has timed-out), then the "status" for this item is sent to the Main Processor A-7.
Sync Tag Usage in Main Processor:
The Main Processor (A-7) compares the master N/S sync tag and the slave N/S sync tag in the JCDB memory buffer for the interrupting JCDB. If the sync tags do not match, then Processor A-7 uses the sync tags from the redundant JCDB memory buffer (see 6-2, Fig. 10) to determine if the fault lies in the JCDB memory buffer, or in the input data from the Histogram/Compressor (H/C bus 6-5, Figs. 9, 10). The "status" from the H/C within the JCDB memory buffer indicates if the H/C detected a mismatch in the sync tags as they were received from the N/S boards.
Main Processor A-7 compares the master N/S sync tags from the front and back JCDB buffers with the sync tag in the next queued "disposition information" and the sync tag in the "status" bits from the DTI, to verify that the sync tags from all four sources match. The main processor also transmits the "sync-tag" to the Document Processor Software (DP-S, Fig. 1B) when processing is complete.
Sync Tag Flow in Imaging Processing Electronics: Figure 2A illustrates sync tag "flow" through the image processing electronics handling the image of one side (assume the front side) of a document. Note: the front and back sides of the document are processed by like, separate sets of electronics. The "intermediate" and "final" sync tags produced are examined by the programs executing in the Imaging Module, to verify that the sync tags remain in sequence for a particular side, and that they match between the two sides.
Assume each Normalizer/Scaler (Master and Slave) completes its processing of a document (image data); then, if the Sync Tag FIFO is not empty (e.g., see Fig. 9), the system assigns the sync tag value at the "head" of the Sync Tag FIFO for the document. But, if the Sync Tag FIFO is empty, the Normalizer/Scaler assigns a sync tag value equal to the value in the "Last Sync Tag Register" for the document, and sets a Sync Tag FIFO "underrun bit" in one of its "fault registers" (e.g., see Fig. 9), internal to each Normalizer/Scaler.
In either case, the Normalizer/Scaler then stores the sync tag value in the "Last Sync Tag Register" (e.g., Fig. 9), and then assembles and transfers the sync tag and status data for the document to the Histogram/Compressor and interrupts the DTI, and removes the entry at the head of the Sync Tag FIFO. The transfer of "status" data (conventionally developed, as workers realize) from the Normalizer/Scaler to the Histogram/Compressor array H/C follows the final scan line of an image, using the same bus as the image data (e.g., illustrated exemplarily in Fig. 8). Following the transfer of the last pixel of an image, from the Normalizer/Scaler boards to the Histogram/Compressors, a "document present" signal (PDOCPRES_N) will usually remain in an inactive state, with the "valid video" signal (PVALID_N) inactive until the status and sync tag bits associated with the image data are ready for transfer to the Histogram/Compressor boards. Coincident with this PDOCPRES_N signal going inactive, the least significant byte of the sync tag from each Normalizer/Scaler board is transferred over a "processed video" (PVIDEO) bus (e.g., see Fig. 2A).
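The per-document tag assignment just described (take the Sync Tag FIFO head if available, else reuse the Last Sync Tag Register and raise an underrun fault bit) might be modeled as follows (class and attribute names are invented for the sketch):

```python
from collections import deque


class SyncTagSource:
    """Model of a Normalizer/Scaler's sync-tag bookkeeping."""

    def __init__(self):
        self.fifo = deque()      # Sync Tag FIFO, loaded by the DTI
        self.last_tag = 0        # Last Sync Tag Register
        self.underrun = False    # Sync Tag FIFO "underrun" fault bit

    def tag_for_document(self):
        """Assign a sync tag to the document just processed."""
        if self.fifo:
            self.last_tag = self.fifo.popleft()   # head of the FIFO
        else:
            self.underrun = True   # reuse last tag and flag the fault
        return self.last_tag
```

Reusing the last tag on underrun (rather than emitting nothing) keeps the downstream comparisons deterministic, so the mismatch is caught as a flagged fault instead of a silent loss of synchronization.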
At the next clock, the most significant byte of the sync tag from each Normalizer/Scaler board is transferred over the PVIDEO bus. During each of the next 30 clock cycles, one byte value may be transferred over the PVIDEO bus to the Histogram/Compressors by each Normalizer/Scaler. Multiple-byte information is transferred least significant byte first. The Histogram/Compressor includes the sync tags and "status" bits received from the Normalizer/Scaler boards in its "compressed image buffer" (see Output Buffer, Fig. 3); these bits are transferred to the JPEG Compressed Data Buffer (e.g., see Fig. 2A). The Histogram/Compressor compares the sync tag bits received from the Master Normalizer/Scaler board with those received from the Slave Normalizer/Scaler board, and a "fault" is declared if they are "unequal". This fault data is also included in the image data that is transferred from the compressed image buffer to the JPEG Compressed Data Buffer.
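The least-significant-byte-first ordering and the Master/Slave tag comparison can be illustrated as follows (a sketch; the function names are ours, and a 16-bit tag width is assumed from the two-byte transfer described above):

```python
def tag_to_bytes(tag):
    """Split a 16-bit sync tag into the byte sequence sent on the
    PVIDEO bus: least significant byte first, then most significant."""
    return [tag & 0xFF, (tag >> 8) & 0xFF]

def bytes_to_tag(lsb, msb):
    """Reassemble the tag at the receiving Histogram/Compressor side."""
    return (msb << 8) | lsb

def master_slave_fault(master_tag, slave_tag):
    """A "fault" is declared when the tags received from the Master
    and Slave Normalizer/Scaler boards are unequal."""
    return master_tag != slave_tag
```

A round trip through `tag_to_bytes` and `bytes_to_tag` leaves the tag unchanged, and any divergence between the two boards is caught by the comparison.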
The JPEG Compressed Data Buffer (Fig. 2A) extracts the sync tags from the Master N/S as data is received from one of the Histogram/Compressors. This sync tag is placed in a queue which can be read by the DTI. (When an entry is "read", it is removed from the queue.) An "interrupt" is presented to the DTI whenever this queue is not "empty".
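The read-removes-entry queue with its not-empty interrupt might be modeled as follows (an illustrative sketch; the class and method names are hypothetical):

```python
from collections import deque

class SyncTagQueue:
    """Sketch of the JCDB sync-tag queue: a read removes the head
    entry, and an interrupt is presented while the queue is non-empty."""

    def __init__(self):
        self._entries = deque()

    def push(self, tag):
        self._entries.append(tag)

    def read(self):
        # When an entry is "read", it is removed from the queue.
        return self._entries.popleft()

    @property
    def interrupt(self):
        # The interrupt is asserted whenever the queue is not empty.
        return len(self._entries) > 0
```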
"New Way" Reprised: The "new way" here-described provides for a positive identification of electronic document image data at every processing station in an imaging module that performs operations "on a per-document basis" (e.g., see DPS and Main Processor in Fig. IB), from the time the image data is first delineated until its data packet is sent for storage to the database.
In the "old way", all transactions between processing entities would, in general, be performed on a first-in, first-out (FIFO) basis, while assuming that the four different associated data queues (front image, rear image, "collateral document information", and image status) remained in synchronization. But this is not a reliable assumption, since synchronization could be verified only via physical document dimensions (e.g., as embedded in the image data) and by "image status"; when processing documents of uniform size, however, this is worth little.
But using "sync tags" as here taught can be quite advantageous; e.g., when made available to the DTI at the Normalizer/Scaler and JCDB boards they verify the integrity of the DTI's image processing electronics and its internally maintained queues, since each entry in a queue also contains the sync tag; recall that sync tags facilitate rapid, reliable detection of a malfunction which throws the Front/Rear image-bits out of sync. Also, sync tags may be used during debugging to easily identify the various pieces of image data and other, collateral, data associated with a particular document image. Having the sync tag embedded into the image data (e.g., like "status") allows this fundamental information to be easily correlated, whereas the "old" way gives no such convenient identifier/synchronizer.
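The Front/Rear out-of-sync detection claimed above can be sketched as a simple sequence check over the two sides' tags (illustrative only; the function name is ours):

```python
def first_out_of_sync(front_tags, rear_tags):
    """Compare the Front and Rear sync-tag sequences document by
    document; return the index of the first mismatch, or None if
    the two sides remain in sync."""
    for i, (front, rear) in enumerate(zip(front_tags, rear_tags)):
        if front != rear:
            return i
    return None
```

A malfunction that drops or reorders one side's image data shows up immediately as a mismatch index, rather than going undetected as it could under the "old way" with uniform-size documents.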
While the invention has been described in connection with the presently preferred embodiment, the principles of the invention are capable of modification and change without departing from the spirit of the invention as set forth in the appended claims.

Claims

1. A method of electronic document-imaging wherein imaging means generates imaging-bits representing a given document and transfers these bits on a "per-document basis" to various successive electronic processing stages and, finally, to a data base storage means (SRM); this method including providing tag means adapted to create tag bits unique for each such imaged document and transferring these tag bits with the imaging bits for each document to each such processing stage that handles the imaging bits on a per document basis, and finally transferring said tag bits to SRM interface means for final matching and removal of the tag bits.
2. The invention of claim 1, wherein said tag bits are checked at some or all of said processing stages.
3. The invention of claim 2, wherein said tag bits are embedded with the imaging bits for each document.
4. The invention of claim 3, wherein two or more imaging means are involved, each generating a respective set of imaging bits, with a common array of tag bits for each said document, these sets of imaging bits thereafter being presented to interface means including compare means to assure that the tag bits for such imaging bits of a given document are the same.
5. The invention of claim 4, wherein said imaging means constitute Front and Rear camera means generating respective Front and Rear sets of imaging bits, and merging each set with a common set of tag bits uniquely identifying the common document.
6. The invention of claim 5, including transferring said Front and Rear sets of imaging bits/tag bits to common processor stages via a single transfer stage DT, along with "collateral bits" such as "status bits"; and also arranging this transfer stage DT to read, and check each pair of tag bits before transferring the image bits/tag bits onward; whereby a "FAULT" signal may be generated when they do not match.
7. The invention of claim 6, including storing all said pairs of bits in Queue means in such transfer stage, and arranging said transfer stage to automatically issue INTERRUPT whenever its said Queue means is "Not Empty".
8. The invention of claim 7, including adapting associated program means to generate said tag bits and to monitor the match thereof before allowing a "valid" final transfer to said storage means.
9. The invention of claim 8, wherein such "collateral bits" are arranged to include "status bits" that are also transferred with their associated image bits.
10. The invention of claim 1, wherein said per-document processing stages are arranged to include a Histogram/Compressor stage H/C fed by Normalizing/Scaling means N/S, and wherein said tag bits are transferred to said N/S means and thence to said H/C means.
11. The invention of claim 9, wherein said tag-bits are so transferred along with respective status bits, after the final scan of a document, using the same bus as for said imaging bits.
12. The invention of claim 10, wherein said N/S means is arranged to include "Last set register" means for storage of respective tag bits and later transfer to said H/C means.
13. The invention of claim 11, wherein said image bits are arranged to comprise JPEG data bits and are input from said H/C means to JPEG Data Buffer means en route to said final storage means.
14. The invention of claim 12, wherein said N/S means is arranged to include Tag Register means input from DT interface means and output to H/C interface means.
15. The invention of claim 13, wherein said N/S means is also arranged to include "Last tag" register means input by said Tag Register means and output to said DT interface means.
16. The invention of claim 14, wherein said N/S means is also arranged to include Fault register means and Status register means.
17. The invention of claim 15, wherein said DT interface means is arranged to include Queue means adapted to automatically be interrupted when "Not empty" and to periodically remove the data at the head of said Tag Register means.
18. The invention of claim 16, including coupling said H/C means to "Compressed Image Data buffer" means which, in turn, is coupled to input said JPEG Buffer means.
19. The invention of claim 17, wherein both said buffer means are adapted to also store "status bits" with each imaging/tag bit set.
20. The invention of claim 13, wherein said N/S means comprises Master N/S means and Slave N/S means, each coupled to receive associated imaging/tag bits in parallel, and both are coupled to common compare means adapted to issue a "FAULT" signal if the tag bits therein do not match for a given set of imaging bits.
21. The invention of claim 19, wherein said "Collateral bits" include "Status bits" transferred with said imaging bits.
22. A method of electronic document-imaging comprising using imaging means to generate imaging-bits representing a given document, transferring these bits on a "per-document basis" to various successive electronic processing stages and, finally, to a data base storage means (SRM); also arranging document-tag means adapted to create tag bits unique for each such imaged document; transferring these tag bits with the imaging bits for each document to each such processing stage that handles the imaging bits on a per-document basis, and finally transferring them to SRM interface means for final matching and removal of the tag bits.
23. The invention of claim 22, as adapted to JPEG Processing wherein said tag bits are checked at some or all of said processing stages.
24. The invention of claim 23, wherein said tag bits are embedded with the imaging bits for each document.
PCT/US1995/014596 1994-11-04 1995-10-25 Automatic check handling, using sync tags WO1996014707A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US33482894A 1994-11-04 1994-11-04
US08/334,828 1994-11-04

Publications (1)

Publication Number Publication Date
WO1996014707A1 true WO1996014707A1 (en) 1996-05-17

Family

ID=23309023

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1995/014596 WO1996014707A1 (en) 1994-11-04 1995-10-25 Automatic check handling, using sync tags

Country Status (1)

Country Link
WO (1) WO1996014707A1 (en)

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108171596A (en) * 2017-12-28 2018-06-15 广州华夏职业学院 A kind of multi task process analysis system and method for finance data
US11250398B1 (en) 2008-02-07 2022-02-15 United Services Automobile Association (Usaa) Systems and methods for mobile deposit of negotiable instruments
US11281903B1 (en) 2013-10-17 2022-03-22 United Services Automobile Association (Usaa) Character count determination for a digital image
US11295378B1 (en) 2010-06-08 2022-04-05 United Services Automobile Association (Usaa) Apparatuses, methods and systems for a video remote deposit capture platform
US11321679B1 (en) 2009-08-21 2022-05-03 United Services Automobile Association (Usaa) Systems and methods for processing an image of a check during mobile deposit
US11328267B1 (en) 2007-09-28 2022-05-10 United Services Automobile Association (Usaa) Systems and methods for digital signature detection
US11348075B1 (en) 2006-10-31 2022-05-31 United Services Automobile Association (Usaa) Systems and methods for remote deposit of checks
US11392912B1 (en) 2007-10-23 2022-07-19 United Services Automobile Association (Usaa) Image processing
US11461743B1 (en) 2006-10-31 2022-10-04 United Services Automobile Association (Usaa) Systems and methods for remote deposit of checks
US11544682B1 (en) 2012-01-05 2023-01-03 United Services Automobile Association (Usaa) System and method for storefront bank deposits
US11617006B1 (en) 2015-12-22 2023-03-28 United Services Automobile Associates (USAA) System and method for capturing audio or video data
US11676285B1 (en) 2018-04-27 2023-06-13 United Services Automobile Association (Usaa) System, computing device, and method for document detection
US11694268B1 (en) 2008-09-08 2023-07-04 United Services Automobile Association (Usaa) Systems and methods for live video financial deposit
US11721117B1 (en) 2009-03-04 2023-08-08 United Services Automobile Association (Usaa) Systems and methods of check processing with background removal
US11749007B1 (en) 2009-02-18 2023-09-05 United Services Automobile Association (Usaa) Systems and methods of check detection
US11756009B1 (en) 2009-08-19 2023-09-12 United Services Automobile Association (Usaa) Apparatuses, methods and systems for a publishing and subscribing platform of depositing negotiable instruments
US11900755B1 (en) 2020-11-30 2024-02-13 United Services Automobile Association (Usaa) System, computing device, and method for document detection and deposit processing
US12008522B1 (en) 2023-06-06 2024-06-11 United Services Automobile Association (Usaa) Apparatuses, methods and systems for a publishing and subscribing platform of depositing negotiable instruments

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0320713A2 (en) * 1987-12-18 1989-06-21 International Business Machines Corporation Document image processing system
EP0376312A2 (en) * 1988-12-29 1990-07-04 Canon Kabushiki Kaisha Image information processing apparatus
US5070404A (en) * 1990-05-15 1991-12-03 Bullock Communications, Inc. Method and apparatus for contemporaneous delivery of data
US5136665A (en) * 1988-02-02 1992-08-04 Canon Kabushiki Kaisha Two-sided original reading apparatus

Cited By (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11544944B1 (en) 2006-10-31 2023-01-03 United Services Automobile Association (Usaa) Digital camera processing system
US11429949B1 (en) 2006-10-31 2022-08-30 United Services Automobile Association (Usaa) Systems and methods for remote deposit of checks
US11875314B1 (en) 2006-10-31 2024-01-16 United Services Automobile Association (Usaa) Systems and methods for remote deposit of checks
US11625770B1 (en) 2006-10-31 2023-04-11 United Services Automobile Association (Usaa) Digital camera processing system
US11538015B1 (en) 2006-10-31 2022-12-27 United Services Automobile Association (Usaa) Systems and methods for remote deposit of checks
US11488405B1 (en) 2006-10-31 2022-11-01 United Services Automobile Association (Usaa) Systems and methods for remote deposit of checks
US11461743B1 (en) 2006-10-31 2022-10-04 United Services Automobile Association (Usaa) Systems and methods for remote deposit of checks
US11562332B1 (en) 2006-10-31 2023-01-24 United Services Automobile Association (Usaa) Systems and methods for remote deposit of checks
US11682222B1 (en) 2006-10-31 2023-06-20 United Services Automobile Associates (USAA) Digital camera processing system
US11348075B1 (en) 2006-10-31 2022-05-31 United Services Automobile Association (Usaa) Systems and methods for remote deposit of checks
US11682221B1 (en) 2006-10-31 2023-06-20 United Services Automobile Associates (USAA) Digital camera processing system
US11328267B1 (en) 2007-09-28 2022-05-10 United Services Automobile Association (Usaa) Systems and methods for digital signature detection
US11392912B1 (en) 2007-10-23 2022-07-19 United Services Automobile Association (Usaa) Image processing
US11250398B1 (en) 2008-02-07 2022-02-15 United Services Automobile Association (Usaa) Systems and methods for mobile deposit of negotiable instruments
US11531973B1 (en) 2008-02-07 2022-12-20 United Services Automobile Association (Usaa) Systems and methods for mobile deposit of negotiable instruments
US11694268B1 (en) 2008-09-08 2023-07-04 United Services Automobile Association (Usaa) Systems and methods for live video financial deposit
US11749007B1 (en) 2009-02-18 2023-09-05 United Services Automobile Association (Usaa) Systems and methods of check detection
US11721117B1 (en) 2009-03-04 2023-08-08 United Services Automobile Association (Usaa) Systems and methods of check processing with background removal
US11756009B1 (en) 2009-08-19 2023-09-12 United Services Automobile Association (Usaa) Apparatuses, methods and systems for a publishing and subscribing platform of depositing negotiable instruments
US11373149B1 (en) 2009-08-21 2022-06-28 United Services Automobile Association (Usaa) Systems and methods for monitoring and processing an image of a check during mobile deposit
US11321679B1 (en) 2009-08-21 2022-05-03 United Services Automobile Association (Usaa) Systems and methods for processing an image of a check during mobile deposit
US11321678B1 (en) 2009-08-21 2022-05-03 United Services Automobile Association (Usaa) Systems and methods for processing an image of a check during mobile deposit
US11341465B1 (en) 2009-08-21 2022-05-24 United Services Automobile Association (Usaa) Systems and methods for image monitoring of check during mobile deposit
US11373150B1 (en) 2009-08-21 2022-06-28 United Services Automobile Association (Usaa) Systems and methods for monitoring and processing an image of a check during mobile deposit
US11893628B1 (en) 2010-06-08 2024-02-06 United Services Automobile Association (Usaa) Apparatuses, methods and systems for a video remote deposit capture platform
US11295378B1 (en) 2010-06-08 2022-04-05 United Services Automobile Association (Usaa) Apparatuses, methods and systems for a video remote deposit capture platform
US11915310B1 (en) 2010-06-08 2024-02-27 United Services Automobile Association (Usaa) Apparatuses, methods and systems for a video remote deposit capture platform
US11295377B1 (en) 2010-06-08 2022-04-05 United Services Automobile Association (Usaa) Automatic remote deposit image preparation apparatuses, methods and systems
US11797960B1 (en) 2012-01-05 2023-10-24 United Services Automobile Association (Usaa) System and method for storefront bank deposits
US11544682B1 (en) 2012-01-05 2023-01-03 United Services Automobile Association (Usaa) System and method for storefront bank deposits
US11694462B1 (en) 2013-10-17 2023-07-04 United Services Automobile Association (Usaa) Character count determination for a digital image
US11281903B1 (en) 2013-10-17 2022-03-22 United Services Automobile Association (Usaa) Character count determination for a digital image
US11617006B1 (en) 2015-12-22 2023-03-28 United Services Automobile Associates (USAA) System and method for capturing audio or video data
CN108171596A (en) * 2017-12-28 2018-06-15 广州华夏职业学院 A kind of multi task process analysis system and method for finance data
US11676285B1 (en) 2018-04-27 2023-06-13 United Services Automobile Association (Usaa) System, computing device, and method for document detection
US11900755B1 (en) 2020-11-30 2024-02-13 United Services Automobile Association (Usaa) System, computing device, and method for document detection and deposit processing
US12008522B1 (en) 2023-06-06 2024-06-11 United Services Automobile Association (Usaa) Apparatuses, methods and systems for a publishing and subscribing platform of depositing negotiable instruments

Similar Documents

Publication Publication Date Title
US5784503A (en) Check reader utilizing sync-tags to match the images at the front and rear faces of a check
WO1996014707A1 (en) Automatic check handling, using sync tags
EP0113410B1 (en) Image processors
US5768446A (en) Document processing
US5528705A (en) JPEG synchronization tag
KR0159831B1 (en) Method for detecting defect
EP0320713B1 (en) Document image processing system
EP0658042B1 (en) Dropped-form document image compression
US4741047A (en) Information storage, retrieval and display system
JP2940936B2 (en) Tablespace identification method
US5862270A (en) Clock free two-dimensional barcode and method for printing and reading the same
US5007100A (en) Diagnostic system for a parallel pipelined image processing system
US5848192A (en) Method and apparatus for digital data compression
US4624013A (en) Linked component extraction circuit for image processor
US4468808A (en) Feature extraction system for digitized character information
NZ306769A (en) Automatic page registration and zone detection during forms processing
EP0807297A2 (en) Method and apparatus for separating foreground from background in images containing text
US6259829B1 (en) Check Reading apparatus and method utilizing sync tags for image matching
US5386482A (en) Address block location method and apparatus
US5287416A (en) Parallel pipelined image processor
Zhang et al. A new system for automatic understanding engineering drawings
JP2509448B2 (en) How to reduce the amount of image information
JP2679098B2 (en) Encoding processing device for contour detection image
JP3099540B2 (en) Image storage method used in optical character reader
JPH0721371A (en) Picture processor

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): JP

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FR GB GR IE IT LU MC NL PT SE

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase