US5642440A - System using ergodic ensemble for image restoration - Google Patents

Info

Publication number
US5642440A
Authority
US
United States
Prior art keywords
image
fourier transform
inverse
ensemble
under test
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
US08/351,707
Inventor
Kenneth G. Leib
Current Assignee
Grumman Corp
Original Assignee
Grumman Aerospace Corp
Priority date
Filing date
Publication date
Application filed by Grumman Aerospace Corp filed Critical Grumman Aerospace Corp
Priority to US08/351,707 priority Critical patent/US5642440A/en
Assigned to GRUMMAN AEROSPACE CORPORATION reassignment GRUMMAN AEROSPACE CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: LEIB, KENNETH G.
Application granted granted Critical
Publication of US5642440A publication Critical patent/US5642440A/en
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Classifications

    • G06T5/73
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/10Image enhancement or restoration by non-spatial domain filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/88Image or video recognition using optical means, e.g. reference filters, holographic masks, frequency domain filters or spatial domain filters
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20048Transform domain processing
    • G06T2207/20056Discrete and fast Fourier transform, [DFT, FFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30176Document


Abstract

Established text characteristics help generate a random ergodic ensemble of characters constituting a random text image. The image is scaled in accordance with different optical parameters such as image distance, focal length, and change in focal length. The test image is printed or displayed so that a Fourier Transform of the image can be taken and recorded. The recorded transform is printed and constitutes an inverse filter. The inverse filter is positioned at an inverse Fourier plane. An actual image (blurred text) undergoes a Fourier Transform and is then filtered by the inverse filter positioned at the inverse Fourier plane. The resulting filtered image then undergoes an inverse Fourier Transform so as to restore, or at least greatly enhance, the blurred image.

Description

RELATED CO-PENDING APPLICATION
The following co-pending application is related to the present invention: application Ser. No. 08/522,112, by the same inventor as the present application, entitled "Blurred Image Reading by Hi-Probability Word Selection," and assigned to the same assignee as the present application.
FIELD OF THE INVENTION
BACKGROUND OF THE INVENTION
In some aspects of military work, access to blurred messages occurs. The need to read these occasional anomalies is obvious, but present means often require time-consuming digital procedures using various algorithms such as the Laplacian, high-pass filtering, and others currently available.
One of the "standard" approaches, both optical and digital, is to use an inverse filter. That is, in an optical system or its digital equivalent, one takes a Fourier Transform of the blurred image and places a filter, whose character is to be determined, in the Fourier or spatial frequency plane. If properly designed, the filter upon reimaging (taking another Fourier Transform) will bring a degree of restoration to the image, rendering it understandable. This means perfect restoration (in one or more operations) is not necessary, or sometimes not even possible. The basis of restoration is summarized in the following sequence of equations:

b(x,y) = i(x,y) * h(x,y)

B(u,v) = I(u,v) · H(u,v)

I(u,v) = B(u,v) · [1 / H(u,v)]

where b is the blurred image, i the unblurred image, h the blur function; the capital letters refer to the Fourier Transforms of the corresponding functions and (*) denotes convolution. The result, in principle, is the inverse filter, 1/H(u,v), which, when inserted in the Fourier plane, should provide image restoration.
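The equation sequence above can be exercised numerically. The following is a minimal one-dimensional sketch, not the patent's optical implementation: a toy signal is blurred by circular convolution with a known blur function h, and dividing by H in the Fourier domain recovers it. The naive DFT, the sample values, and the epsilon guard against near-zero spectral values are all illustrative choices.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform of a sequence."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def idft(X):
    """Inverse discrete Fourier transform."""
    n = len(X)
    return [sum(X[j] * cmath.exp(2j * cmath.pi * j * k / n) for j in range(n)) / n
            for k in range(n)]

def circular_convolve(x, h):
    """Circular convolution, so that DFT(x * h) = DFT(x) . DFT(h)."""
    n = len(x)
    return [sum(x[(k - m) % n] * h[m] for m in range(n)) for k in range(n)]

# A toy "image" row i(x) and a blur function h(x) padded to signal length.
signal = [0, 1, 3, 1, 0, 0, 2, 0]
blur = [0.6, 0.2, 0, 0, 0, 0, 0, 0.2]
blurred = circular_convolve(signal, blur)          # b = i * h

# B = I . H in the Fourier domain; the inverse filter is 1/H, with an
# epsilon guard where the blur spectrum is (nearly) zero.
B, H = dft(blurred), dft(blur)
eps = 1e-12
I_hat = [b / h if abs(h) > eps else 0 for b, h in zip(B, H)]
restored = [round(v.real, 6) for v in idft(I_hat)]
print(restored)   # recovers the original signal, up to rounding
```

Real blur functions often have spectral zeros, which is one reason the patent calls the direct determination of the inverse filter "a difficult and unsure process."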
In FIGS. 1A and 1B we can see pictorially what is done. In FIG. 1A we have the absolute value of amplitude for an image with the modulus of the inverse filter shown in FIG. 1B. In the simplest case the first and third orders would have negative phase and the second and fourth, positive. In reality the spectrum amplitude and phase are much more complicated in distribution throughout the spatial frequency domain.
Much work has been and is being done, principally in the digital analysis world, with such techniques as contrast enhancement routines, constrained least squares filtering, extended filters, optimizing mean square error filters, and other extensions or alterations of the Wiener filter. The work also includes the standard digital fare like high-pass filtering with convolution matrices, establishing median filters wherein each pixel is replaced by the median of itself and its eight neighbors (a 3×3 matrix), and Kalman filtering with various kernels. In others, adaptive filtering is performed. This is a technique of performing a large number of iterations of, in sequence, the Fourier Transform, assessment, modification, inverse transform, assessment, Fourier Transform, modification, and so forth. A priori knowledge or good guessing drives the modifications in the sequence. In some iterative routines, the investigator assumes that the degradation must lie between or within a set of parameters and makes appropriate modifications based upon this.
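As a concrete instance of the median filtering mentioned above, the sketch below replaces each interior pixel with the median of its 3×3 neighborhood. The toy image and the choice to leave border pixels unchanged are illustrative assumptions, not part of any particular prior-art system.

```python
import statistics

def median_filter_3x3(img):
    """Replace each interior pixel with the median of its 3x3 neighborhood.
    Border pixels are left unchanged in this sketch."""
    rows, cols = len(img), len(img[0])
    out = [row[:] for row in img]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            window = [img[r + dr][c + dc] for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
            out[r][c] = statistics.median(window)
    return out

# Impulse noise (the 99) is removed while the smooth background survives.
noisy = [
    [10, 10, 10, 10],
    [10, 99, 10, 10],
    [10, 10, 10, 10],
    [10, 10, 10, 10],
]
filtered = median_filter_3x3(noisy)
print(filtered)
```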
BRIEF DESCRIPTION OF THE PRESENT INVENTION
The present invention approaches the problem of restoration somewhat differently. For example, in several prior art approaches, a source of blurring is often guessed at and becomes the basis of restoration routines. In this invention it is assumed that the structure and frequencies of occurrence of the elements of the image are known, i.e., the elements that make up the text. We assume, by whatever means necessary, that the following information is or can be made available by a sequence of measurements, some described here:
Alpha-numeric origins
Letter frequency
Probability of occurrence
Font and point values
Line spacing data
Orientation determinant
Absolute value, e.g., 4.25 mm.
Recording parameters
Focal length
Film type
f/#
Exposure time
Text Structure
#words, sentences, paragraphs
#letters/word
word distribution
#Capital letters
It has also been demonstrated that we can perform operations (to be described) that measure the line spacing and, from the data, determine image position and/or the degree of keystoning (the text plane is not orthogonal to the recording axis when captured).
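The line-spacing measurement just mentioned can be sketched as follows: a vertical scan yields an intensity profile in which dark bands mark text lines, and the distances between successive band centers give the spacings to be averaged. The threshold value and the synthetic scan profile are illustrative assumptions.

```python
def line_spacings(profile, threshold):
    """Find dark text-line bands in a vertical intensity scan and return
    the distances between successive band centers (in scan samples)."""
    bands, start = [], None
    for i, v in enumerate(profile):
        if v < threshold and start is None:
            start = i                           # entering a dark band (a text line)
        elif v >= threshold and start is not None:
            bands.append((start + i - 1) / 2)   # center of the band just left
            start = None
    if start is not None:                       # band runs to the end of the scan
        bands.append((start + len(profile) - 1) / 2)
    return [b - a for a, b in zip(bands, bands[1:])]

# Synthetic scan: white background (255) with dark lines every 6 samples.
scan = [255] * 30
for center in (4, 10, 16, 22, 28):
    for off in (-1, 0, 1):
        scan[center + off] = 40

spacings = line_spacings(scan, 128)
avg = sum(spacings) / len(spacings)   # average spacing serves as the standard
print(spacings, avg)
```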
BRIEF DESCRIPTION OF THE FIGURES
The above-mentioned objects and advantages of the present invention will be more clearly understood when considered in conjunction with the accompanying drawings, in which:
FIG. 1A is a plot of the absolute value of amplitude for an image;
FIG. 1B is a plot of a modulus of an inverse filter corresponding to the image of FIG. 1A;
FIG. 2 is a schematic illustration of the general sequence for restoring an image in accordance with the present invention;
FIG. 3 is a schematic illustration of a camera relative to a movable image;
FIG. 4A is a plot of a line spacing scan wherein the image is maintained perpendicular to the optical axis;
FIG. 4B is a plot similar to that of FIG. 4A but wherein the image is tilted 18° ;
FIG. 5 is a plot relating normalized focus position to a line scan;
FIG. 6A is a matrix of focal conditions governing the production of a controlled blurred image;
FIG. 6B is a first comparative series of conditions of object position;
FIG. 6C is a second comparative series of conditions of object position;
FIG. 7 is a logic diagram of a network for determining the angular orientation of a blurred image relative to an optical axis;
FIG. 8 is a profile illustration of a scan in which the word and column spacing can be determined.
DETAILED DESCRIPTION OF THE INVENTION
The invention is a system for capturing a blurred image on a camera, making specific measurements and, with the results of measurements and a priori input information, generating a specific inverse filter for restoring the blurred image.
All a priori information is based upon the constancy of the language (in our case, English); i.e., the a priori inputs are determined because of this constancy. Other languages have a constancy, but the a priori inputs would be different. For example, German, with its propensity for compound words, has longer words on average; its a priori inputs would be somewhat different from those of English but would be constant for the German language.
A blurred image can be looked at as the convolution of the unblurred image and the blur function (and sometimes additive noise). Thus, if one were to multiply the blurred function in the Fourier plane by the inverse of the blur function, restoration would be achieved. It is a simple idea; but there are many ramifications such as film and camera characteristics, motion, and focal conditions, shutter functions and in the case of text, the intraline interference. The many factors involved make the determination of the amplitude and phase of the inverse filter a difficult and unsure process.
The restoration invention involves the generation of a prescribed text used to make a filter based upon the characteristics of the text. Thus, one must know something about the text. For most applications, a priori knowledge of the font, point size, and line spacing are known. More comments about these factors are given later. Measurements are then made of text samples and this data is used to generate a random sample of sufficient size so that the optical Fourier Transform can be taken. Then, the amplitude and phase as a function of spatial frequency are used to generate an amplitude or phase-only filter. The latter can be digitized and made in either silver halide or a high efficiency dichromated gelatin or photopolymer. In the latter case it is referred to as a binary-phase-only-inverse-filter (BPOIF), a subject of current technical interest.
The technique for generating the filter is based upon the fact that the English language is ergodic, i.e., an infinite text message has certain properties that are similar to those of an infinite number of analyzed short text messages. In general the content of a blurred message is unknown, but we do know its general characteristics--font, point value, etc. The invention is based upon using the known factors about the text and language to generate the optimum filter; and if the message size is sufficient, the filter is applicable.
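The ergodic property can be illustrated directly: letter-frequency distributions measured from independent English samples resemble each other, which is what allows a random reference text to stand in for the unknown message. The two sample passages below are arbitrary, and the total variation distance is just one convenient way to compare the distributions.

```python
from collections import Counter

def letter_freq(text):
    """Relative frequency of each letter, ignoring case and non-letters."""
    letters = [c for c in text.lower() if c.isalpha()]
    counts = Counter(letters)
    total = len(letters)
    return {c: counts[c] / total for c in counts}

# Two unrelated English passages: their letter statistics are close,
# which is the ergodic property the filter construction relies on.
a = "the quick brown fox jumps over the lazy dog and then rests in the shade"
b = "an image of blurred text can be restored when the statistics of the language are known in advance"

fa, fb = letter_freq(a), letter_freq(b)

# Total variation distance between the two letter distributions.
keys = set(fa) | set(fb)
tvd = 0.5 * sum(abs(fa.get(k, 0) - fb.get(k, 0)) for k in keys)
print(round(tvd, 3))   # small for same-language samples
```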
FIG. 2 is a schematic of the generalized method in accordance with the present invention. For various known text images, input data is provided at 10 which concerns such parameters as fonts and their point sizes. As discussed in this specification, analyses exist for word, sentence, and paragraph structure in terms of frequency of occurrence. This is indicated at 12. The input data and the structure analysis establish text characteristics as denoted by reference numeral 14. These characteristics may be used to generate a random ergodic ensemble (random reference image) as indicated at 16. Such a test image may be printed or displayed at 20 for various optical parameters (18) such as image distance, focal length, and changes in focal length. For any particular set of optical parameters, the printed or displayed image undergoes a Fourier Transform at 22. The Fourier Transform is recorded at 24 and printed to scale as an inverse filter at 26. The resulting filter is positioned at an inverse Fourier plane (28). A particular image under test (30) undergoes a Fourier Transform at 32 and the resulting transformed image is projected onto the inverse Fourier plane where the inverse filter exists. By subjecting the resulting filtered image to an inverse Fourier Transform at 34, a restored image 36, or at least a greatly enhanced image, is obtainable.
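A random ergodic ensemble of the kind generated at 16 can be sketched as follows: words are drawn according to a priori word-length and letter-frequency statistics. The particular letter weights and word-length samples below are illustrative stand-ins for the measured values the method assumes at steps 10 and 12.

```python
import random

# Hypothetical a priori inputs for English; real values would come from
# the structure analyses described in the specification.
LETTER_WEIGHTS = {'e': 12.7, 't': 9.1, 'a': 8.2, 'o': 7.5, 'i': 7.0,
                  'n': 6.7, 's': 6.3, 'h': 6.1, 'r': 6.0, 'd': 4.3}
WORD_LENGTHS = [2, 3, 4, 4, 5, 5, 6, 7]   # sampled word-size distribution

def random_ensemble(n_words, seed=0):
    """Generate a random reference text whose letter and word-length
    statistics follow the a priori distributions above."""
    rng = random.Random(seed)
    letters = list(LETTER_WEIGHTS)
    weights = list(LETTER_WEIGHTS.values())
    words = []
    for _ in range(n_words):
        size = rng.choice(WORD_LENGTHS)
        words.append(''.join(rng.choices(letters, weights=weights, k=size)))
    return ' '.join(words)

text = random_ensemble(20)
print(text)
```

In the patent's system this reference text would then be printed or displayed and optically Fourier-transformed to produce the inverse filter.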
FIG. 3 illustrates the setup when the original blurred image is captured, perhaps in an ideal way. The image is shown at its in-focus position with the dotted lines illustrating the region within which the image might be located at some other time. The extent of the position variation is f ± Δf, where f represents the focal position of the pickup system consisting of the CCD camera and its associated lens. A generic PC is used to capture and process the image under test.
However oriented, two measurements would be conducted. These would be scans perpendicular to the apparent line orientation, remembering that a test may be necessary to obtain the orientation, as a blurred text may not easily be oriented by eye. When oriented, several measurements are made so as to take an average line spacing. The scans run through various distributions of blurred text, so line spacing is not obtainable in one arbitrary scan. In a similar way, we scan along the center of each line in a prescribed sequence. The first set of measurements yields the line spacing under various Δf conditions as well as keystoning conditions. Keystoning occurs when the image tilts relative to an optical axis. FIGS. 4A and 4B show the result of line spacing scans in which first, the image is perpendicular to the optical axis, and second, it is tilted 18°. Note should be made of the uniform spacing for the 0° orientation and the variability of line spacing with the keystoned orientation. In fact, one can use this measurement to determine the degree of keystoning. In a second set of controlled scans, we can move the image between the limits of 2Δf of FIG. 3 and at each position make a scan. We would obtain for the data the curve shown in FIG. 5 relating normalized focus position to the line scan. Finally, if we combine the two conditions, line scan for tilt and no-tilt, with Δf, we obtain the matrix of focal conditions of FIG. 6A with the conditions necessary to obtain the matrix shown in FIG. 6B. In FIG. 6A, the terms T, B, and S refer to the top of the scan, the bottom, and the standard (a priori known) scan. The latter is the same as at the center of a scan when the image is located at focus, the F position in FIG. 6B. In this invention the process to obtain matrix elements can be achieved by the representative logic plan of FIG. 7. The latter figure and that of FIG. 6A should be considered together.
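The distinction between the scans of FIGS. 4A and 4B reduces to a simple statistic: uniform line spacings indicate a page square to the optical axis, while a steady drift in spacing indicates keystoning, and the size of the drift measures its degree. A least-squares slope over line index, as sketched below, is one illustrative way to extract that trend; the sample spacings are invented for the demonstration.

```python
def spacing_trend(spacings):
    """Least-squares slope of line spacing versus line index. A near-zero
    slope means the page is square to the optical axis; a steady drift
    means the image is keystoned (tilted)."""
    n = len(spacings)
    xs = range(n)
    mx = sum(xs) / n
    my = sum(spacings) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, spacings))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

flat = [6.0, 6.0, 6.0, 6.0, 6.0]      # like FIG. 4A: 0 degree orientation
tilted = [4.8, 5.4, 6.0, 6.6, 7.2]    # like FIG. 4B: keystoned orientation
print(spacing_trend(flat), spacing_trend(tilted))
```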
FIG. 7 is a logic diagram of a network for determining the angular orientation of a blurred image relative to an optical axis. A vertical scan of a blurred image is made by a camera and the data relative to each vertical scan is digitized at 42. The intervals between signals correspond to spaces between lines, and this is determined at 44. A line selector 38 generates a standard line separation signal at output 40 which serves as an input to comparators 48, 46, and 50. The top line comparator 48 compares the line spacing at a particular point along the vertical scan with the standard. The results of the comparison will indicate whether the real time line spacing is less than the standard (52), equal to the standard (54), or greater than the standard (56). The optical axis comparator 46 simply compares the real time line-space determination from 44 relative to the standard value at 43 and a binary 1 or 0 will result at its output.
A bottom line comparator 50 establishes the line-space determination at the bottom of a vertical scan and makes comparisons relative to the standard value (at 45). Thus, unique outputs will result if the bottom line space is less than the standard value (58), equal to the standard (60), or greater than the standard (62).
A first level of AND gates 64 and 66 has inputs connected to the top and bottom line comparators 48 and 50, respectively. Their respective outputs indicate the focal length differential indicated in the figure which is correlatable to the matrix of focal conditions in FIG. 6A.
A second level of AND gates comprises 68 and 70 which compare the indicated comparisons of the top and bottom line comparators 48 and 50 for providing additional entries to the matrix of focal conditions.
A third level of AND gates 72 and 74 has their inputs connected to the top and bottom line comparators 48 and 50 as indicated in the figure so as to provide additional entries for the matrix of focal conditions. A final network of AND gates 78 and 80 has its inputs connected to still other output lines of the top and bottom line comparators 48 and 50 for establishing further entries in the matrix of focal conditions. An output from AND gate 76 signifies that a view along the optical axis is currently present.
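The comparator-and-gate network of FIG. 7 can be sketched in software. The three-way comparators below stand in for blocks 48 and 50, and the combining function plays the role of the AND-gate network that fills the matrix of focal conditions; the tolerance value and the output labels are illustrative assumptions, not the patent's exact gate wiring.

```python
def compare(measured, standard, tol=0.05):
    """Three-way comparator, as in blocks 48 and 50 of FIG. 7: returns
    '<', '=', or '>' for the measured line spacing versus the standard."""
    if measured < standard * (1 - tol):
        return '<'
    if measured > standard * (1 + tol):
        return '>'
    return '='

def focal_condition(top_spacing, bottom_spacing, standard):
    """Combine the top- and bottom-line comparator outputs (the role of
    the AND-gate network) into one entry of the focal-condition matrix."""
    t = compare(top_spacing, standard)
    b = compare(bottom_spacing, standard)
    if t == b == '=':
        return 'on-axis, in focus'          # analogous to AND gate 76
    if t == b:
        return f'defocused ({t} standard at top and bottom)'
    return f'keystoned (top {t}, bottom {b})'

print(focal_condition(6.0, 6.0, 6.0))
print(focal_condition(5.0, 7.0, 6.0))
```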
The horizontal scan through the text lines yields data which enables the word size to be determined. FIG. 8 illustrates a scan in which the word and column spacing can be determined. In similar fashion sentence, column, and paragraph spacing can be determined.
In a way that is similar to the line scan results graphically shown in FIG. 5, one may take a sequence of text line scans with the test image at various positions; measure the word, sentence, column, and paragraph spacings; and plot a curve of word space versus normalized focus position. This would yield an appropriate distribution. The results of such measurements coupled with those for line scan would then be used to determine corrections to an ergodic ensemble inverse filter because the test results could be compared with the unknown. In other words, the controlled blurring would be compared with the blind or unknown blurred image to effect corrections.
Once the random ergodic ensemble has been generated, it is used to generate a matrix of blurred conditions, + and -. After the blurred samples are generated, they are measured and compared with the "unknown" until the best match is achieved. This forms the basis of the first inverse filter; then, with each restoration, a less-defocused sample is used for an inverse filter until gradually the inverse filter close to the clear text restores the imagery. The line spacing responsible for the high frequency characteristics is incrementally restored to its proper value. The ergodic ensemble provides the basis for restoring other information as well.
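The match-then-refine loop just described can be sketched as follows: pick the controlled-blur sample whose measurement is closest to the unknown's, then step the inverse filter toward the in-focus case one defocus increment at a time. The metric model and the defocus steps are hypothetical stand-ins for the measured quantities.

```python
def best_match(unknown_metric, samples):
    """Pick the controlled-blur sample whose measured metric (e.g. average
    line spacing) is closest to the unknown image's measurement."""
    return min(samples, key=lambda s: abs(s['metric'] - unknown_metric))

# Hypothetical ensemble samples blurred by +/- defocus steps; 'metric'
# stands in for the measured blur indicator, 'defocus' for the step.
samples = [{'defocus': d, 'metric': 6.0 + 0.4 * abs(d)} for d in range(-3, 4)]

unknown = 6.9                       # measurement taken from the blurred unknown
current = best_match(unknown, samples)

# Walk from the matched defocus magnitude toward zero, one step at a time:
# each pass applies a slightly less defocused inverse filter than the last.
schedule = []
d = abs(current['defocus'])
while d >= 0:
    schedule.append(d)
    d -= 1

print(current['defocus'], schedule)
```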
Finally, it has been stated that the ergodic ensemble inverse filter can be constructed in silver halide, dichromated gelatin or such materials as photopolymers available from duPont and Polaroid Corporations.
It should be understood that the invention is not limited to the exact details of construction shown and described herein for obvious modifications will occur to persons skilled in the art.

Claims (5)

I claim:
1. A method for deblurring a text image comprising the steps:
providing data of parameters for different fonts having varying point sizes;
providing a priori data regarding text structure including average size of words, sentences, and paragraphs for a preselected language;
combining the data of parameter and a priori data to generate a random ergodic ensemble of text characters;
generating an image of the ensemble;
producing a Fourier transform of the ensemble image;
recording the Fourier transform of the ensemble image;
producing an image of the recorded Fourier transform which serves as an inverse filter;
positioning the inverse filter at an inverse Fourier plane;
producing a Fourier transform of an image under test;
projecting the Fourier transform of an image under test through the inverse filter;
subjecting an inverse filtered image under test to an inverse Fourier transform; and
displaying an image under test after inverse Fourier transformation resulting in an enhancement of the image under test.
2. The method set forth in claim 1 wherein the a priori data is based upon constancy in a language and includes the average frequency of occurrence of all the letters in textual material written in a preselected language.
3. The method set forth in claim 1 wherein the a priori data includes the average distribution of word size in textual material written in a preselected language.
4. A method for deblurring a text image comprising the steps:
providing data of parameters for different fonts having varying point sizes;
providing a priori data regarding text structure including average size of words, sentences, and paragraphs for a preselected language;
combining the data of parameters and the a priori data to generate a random ergodic ensemble of text characters;
subjecting the generated random ergodic ensemble to a plurality of optical parameters simulating variations of an optical system which views normal text;
generating an image of the ensemble;
producing a Fourier transform of the ensemble image;
recording the Fourier transform of the ensemble image;
producing an image of the recorded Fourier transform which serves as an inverse filter;
positioning the inverse filter at an inverse Fourier plane;
producing a Fourier transform of an image under test;
projecting the Fourier transform of the image under test through the inverse filter;
subjecting the inverse filtered image under test to an inverse Fourier transform; and
displaying the image under test after inverse Fourier transformation, resulting in an enhancement of the image under test.
5. The method set forth in claim 4 wherein the a priori data includes the average:
(a) frequency of occurrence of all the letters in textual material written in a preselected language; and
(b) distribution of word size in textual material written in the language.
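The pipeline recited in claims 1 and 4 — generate a random ergodic ensemble, Fourier-transform it to derive an inverse filter, then pass the Fourier transform of the image under test through that filter and inverse-transform the result — has a direct digital analogue. The NumPy sketch below illustrates only the frequency-domain inversion; the Gaussian blur model, the synthetic binary "ensemble," and the regularization constant `eps` are assumptions added for the simulation, not part of the patented system, which records the ensemble transform and realizes the inverse filter holographically.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the random ergodic ensemble of text characters:
# a sparse binary random field (an assumption; the patent builds the
# ensemble from font parameters and a priori language statistics).
ensemble = (rng.random((64, 64)) > 0.8).astype(float)

# Simulated optical degradation: a small Gaussian point-spread function
# (an assumption standing in for the blur of the viewing optics).
x = np.arange(64) - 32
psf = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / (2 * 1.0 ** 2))
psf /= psf.sum()
H = np.fft.fft2(np.fft.ifftshift(psf))  # blur transfer function

# "Image under test": the ensemble viewed through the blurring optics.
blurred = np.real(np.fft.ifft2(np.fft.fft2(ensemble) * H))

# Inverse filter at the Fourier plane. A tiny eps keeps the division
# stable where the spectrum is near zero (Wiener-style regularization,
# an assumption for this noise-free simulation).
eps = 1e-12
inv_filter = np.conj(H) / (np.abs(H) ** 2 + eps)

# Transform the image under test, project it through the inverse
# filter, and inverse-transform back to obtain the restored image.
restored = np.real(np.fft.ifft2(np.fft.fft2(blurred) * inv_filter))

err = np.abs(restored - ensemble).max()
```

In this noise-free setting the restoration is essentially exact; in practice (and in the patent's optical realization) the inverse filter must be bounded where the ensemble spectrum is weak, which is what the regularization term stands in for here.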

Priority Applications (1)

Application Number Priority Date Filing Date Title
US08/351,707 US5642440A (en) 1994-12-08 1994-12-08 System using ergodic ensemble for image restoration


Publications (1)

Publication Number Publication Date
US5642440A true US5642440A (en) 1997-06-24

Family

ID=23382025

Family Applications (1)

Application Number Title Priority Date Filing Date
US08/351,707 Expired - Lifetime US5642440A (en) 1994-12-08 1994-12-08 System using ergodic ensemble for image restoration

Country Status (1)

Country Link
US (1) US5642440A (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3805596A (en) * 1972-02-24 1974-04-23 C Klahr High resolution ultrasonic imaging scanner
US3969017A (en) * 1972-07-01 1976-07-13 U.S. Philips Corporation Method of image enhancement
US4275265A (en) * 1978-10-02 1981-06-23 Wisconsin Alumni Research Foundation Complete substitution permutation enciphering and deciphering circuit
US4329020A (en) * 1980-01-30 1982-05-11 U.S. Philips Corporation Method of manufacturing inverse filters by holographic techniques
US4735486A (en) * 1985-03-29 1988-04-05 Grumman Aerospace Corporation Systems and methods for processing optical correlator memory devices
US5075896A (en) * 1989-10-25 1991-12-24 Xerox Corporation Character and phoneme recognition based on probability clustering
US5384863A (en) * 1991-11-19 1995-01-24 Xerox Corporation Methods and apparatus for automatic modification of semantically significant portions of a document without document image decoding
US5453844A (en) * 1993-07-21 1995-09-26 The University Of Rochester Image data coding and compression system utilizing controlled blurring


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Stroke, "Optical Computing," IEEE Spectrum, Dec. 1972, pp. 24-41.

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110116141A1 (en) * 2009-11-13 2011-05-19 Hui-Jan Chien Image processing method and image processing apparatus
US8233713B2 (en) * 2009-11-13 2012-07-31 Primax Electronics Ltd. Image processing method and image processing apparatus
US20150379695A1 (en) * 2013-03-04 2015-12-31 Fujifilm Corporation Restoration filter generation device and method, image processing device and method, imaging device, and non-transitory computer-readable medium
US9799101B2 (en) * 2013-03-04 2017-10-24 Fujifilm Corporation Restoration filter generation device and method, image processing device and method, imaging device, and non-transitory computer-readable medium

Similar Documents

Publication Publication Date Title
US6459818B1 (en) System for recovery of degraded images
Wen et al. Bispectral analysis and recovery of images distorted by a moving water surface
Kanungo et al. Nonlinear global and local document degradation models
Nieuwenhuizen et al. Deep learning for software-based turbulence mitigation in long-range imaging
CN109697442B (en) Training method and device of character recognition model
Hardie et al. Fusion of interpolated frames superresolution in the presence of atmospheric optical turbulence
US5642440A (en) System using ergodic ensemble for image restoration
Ljubenović et al. Plug-and-play approach to class-adapted blind image deblurring
Zhou et al. Parameter-free Gaussian PSF model for extended depth of field in brightfield microscopy
Rucci et al. Atmospheric optical turbulence mitigation using iterative image registration and least squares lucky look fusion
Dey Image Processing Masterclass with Python: 50+ Solutions and Techniques Solving Complex Digital Image Processing Challenges Using Numpy, Scipy, Pytorch and Keras (English Edition)
CN114663284A (en) Infrared thermal imaging panoramic image processing method, system and storage medium
Wong et al. Regularization-based modulation transfer function compensation for optical satellite image restoration using joint statistical model in curvelet domain
CN113888424A (en) Historical relic photo color restoration method and device, electronic equipment and storage medium
JP2003317095A (en) Method and program for image sharpening processing, recording medium where the image sharpening processing program is recorded, and image output device
Prette et al. Towards unsupervised multi-temporal satellite image super-resolution
US6282324B1 (en) Text image deblurring by high-probability word selection
Moisan Modeling and image processing
Moon et al. Continuous digital zooming using local self-similarity-based super-resolution for an asymmetric dual camera system
Caron Utilization of image phase information to achieve super-sampling
Nieuwenhuizen et al. Assessing the prospects for robust sub-diffraction limited super-resolution imaging with deep neural networks
Stroke Optical image deblurring methods
Jain et al. Evaluation of neural network algorithms for atmospheric turbulence mitigation
Ripley Bayesian methods of deconvolution and shape classification
Gu et al. Deep fusion prior for plenoptic super-resolution all-in-focus imaging

Legal Events

Date Code Title Description
AS Assignment

Owner name: GRUMMAN AEROSPACE CORPORATION

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:LEIB, KENNETH G.;REEL/FRAME:007264/0375

Effective date: 19941201

STCF Information on status: patent grant

Free format text: PATENTED CASE

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

FEPP Fee payment procedure

Free format text: PAYER NUMBER DE-ASSIGNED (ORIGINAL EVENT CODE: RMPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 12