EP0531923A2 - Method and apparatus for gray-level quantization - Google Patents

Method and apparatus for gray-level quantization

Info

Publication number
EP0531923A2
Authority
EP
European Patent Office
Prior art keywords
image
values
pixel
pixels
window
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP92115296A
Other languages
English (en)
French (fr)
Other versions
EP0531923A3 (de)
Inventor
Mohsen Ghaderi, c/o Eastman Kodak Company
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Eastman Kodak Co
Original Assignee
Eastman Kodak Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Eastman Kodak Co filed Critical Eastman Kodak Co
Publication of EP0531923A2
Publication of EP0531923A3
Withdrawn legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/40Picture signal circuits
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/16Image preprocessing
    • G06V30/162Quantising the image signal
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/40Picture signal circuits
    • H04N1/403Discrimination between the two tones in the picture signal of a two-tone original
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition

Definitions

  • The present invention is directed to image processing. It finds particular, although not exclusive, application to image-processing systems intended for textual and similar images.
  • Copiers and other image-capturing equipment scan analog documents and produce digital signals that represent the values of the picture elements ("pixels") of which the copying system treats the source image as being composed.
  • Often the source images are of the textual type. That is, the images are letters, numbers, and line-drawing graphics in which the original information lay completely in the presence of a black or white level; no information was initially intended to be conveyed by any shade of gray.
  • Nonetheless, the imaging process must typically deal with shades of gray. This is partially because the lighting environments of different images vary and partially because repeated copying and the like has made different parts of the, say, "black" regions lighter than other parts. Additionally, the use of gray levels reduces the visibility of the "jaggies" that result from the spatially discrete nature of the scanning process. As a consequence of these factors, an intolerable loss in legibility could result if the images were recorded simply as black or white, i.e., if the intensity levels were recorded in only one bit per pixel. Accordingly, it is necessary in most cases to record the images in a gray-scale (multi-bit) representation.
  • U.S. Patent No. 4,853,969 to Weideman describes an adaptive quantization technique, in which the high-intensity-resolution representation in the initial data capture is reduced for storage or processing so as to limit the cost or increase the speed of the apparatus performing those functions.
  • The Weideman arrangement reduces the number of bits per pixel while retaining a fair amount of the effective resolution by adapting the values of a fixed number of quantization thresholds, and thus the sub-ranges that pairs of adjacent levels define, in accordance with the input values that the quantization process is to quantize.
  • The invention concerns image processing in which an image source is scanned to generate raw pixel values representing a raw version of the source's image, processed pixel values are generated by performing an image process on the raw pixel values, and the resulting processed image values are stored in an appropriate medium or the image that they represent is displayed.
  • The image processing may comprise a set of one or more processing steps, each of which produces associated pixel-value outputs representing a version of the image by processing associated pixel-value inputs representing the values of corresponding pixels in at least one other version of the image, such as the raw version that results from the scanning process or another version produced by one of the other processing steps.
  • The image processing includes a quantization step.
  • In accordance with the invention, each pixel has a background range derived for it from the values of other pixels in a version of the image. Quantization levels are then established outside the background range, and the quantization step's pixel-value outputs are generated by quantizing its pixel-value inputs in accordance with the set of quantization levels thus established.
  • A typical image-processing system 10 of the type in which the present invention might be employed produces, for a memory 12 or a display 14, output data that represent an image obtained from an image source, such as microfilm 16 illuminated by a light source 18 and scanned by a scanner 20 to produce raw pixel values representing the scanned image.
  • The expected image is textual, specifically black characters on a white background, and lower values represent lighter areas while higher values represent darker areas.
  • The image process may employ a number of process steps, each of which receives as its input the raw version of the image produced by the scanner and/or one or more of the other versions of the image produced by other process steps and generates a modified version of that image as its output.
  • The salient features of the present invention are incorporated in a quantizer 22, which typically, although not necessarily, follows other image-processing elements, such as an adaptive filter 24, and may be followed by other elements.
  • The quantizer 22 quantizes the adaptive-filter output in accordance with parameters that it determines from the raw output of the scanner 20.
  • Quantizers typically receive pixel values expressed in a relatively large number M of bits per pixel and convert them to corresponding values expressed in a smaller number N of bits per pixel. In doing so, a quantizer at least implicitly establishes quantization thresholds and quantization sub-ranges. Each pair of successive quantization thresholds defines a quantization sub-range, and all pixels whose input values fall within a given sub-range are assigned the same output pixel value. That is, given an ordered sequence of quantization thresholds T_i, where T_(i-1) < T_i < T_(i+1), a pixel's output value is i if its input pixel value falls between T_(i-1) and T_i.
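  • As an informal illustration (not part of the patent), the threshold-to-output mapping just described might be sketched in Python as follows; the function name and the example thresholds are assumptions for demonstration only:

```python
def quantize(value, thresholds):
    """Return the index of the quantization sub-range that `value`
    falls in: 0 below the first threshold, i when
    thresholds[i-1] <= value < thresholds[i], and len(thresholds)
    at or above the last threshold."""
    out = 0
    for t in thresholds:
        if value >= t:
            out += 1
        else:
            break
    return out

# Three thresholds define four sub-ranges, i.e., a 2-bit output.
thresholds = [64, 128, 192]
print(quantize(50, thresholds))    # below all thresholds -> 0
print(quantize(130, thresholds))   # between 128 and 192 -> 2
```

Every input value in a given sub-range thus collapses to the same output code, which is what makes the placement of the thresholds, discussed next, decisive for legibility.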
  • In accordance with the invention, the total quantizer input range is divided between a "background" range and a "signal" range separated by a boundary determined in accordance with the incoming pixel values in a version of the image being processed.
  • The quantization thresholds are then chosen to occupy the signal range--which in text-type images is typically the range occupied by characters--while remaining outside the background range, so that the background range occupies only one of the sub-ranges that the quantization thresholds define.
  • The manner in which the background range is determined is not critical, but I have found that a good way to choose it is to employ, as the threshold between the background and signal ranges, a level that is roughly the average of the pixel values in the image version upon which the background range is based, or at least the average of those values that occupy a predetermined part of the range permitted by the number system in which the pixel values are provided.
  • Both ends of the quantization-threshold range may be set adaptively; not only is one end of the range set in accordance with the background level, but the other end is set by other characteristics of the image's pixel values.
  • Although the particular manner in which the levels are determined from the signal-range data is not critical, I have found a certain variation-based method to be particularly beneficial.
  • The threshold sequence in the illustrated embodiment is determined in accordance with three image-dependent parameters: BCK_LEVEL, IMG, and UPPER_THR. The manner in which these parameters are determined from the image will be described in detail below.
  • The lowest threshold is set equal to BCK_LEVEL; all input values below this are assigned an output value of zero.
  • BCK_LEVEL is established in such a manner as usually to have a value just a little above the general background level in the particular image being quantized.
  • The intermediate quantization thresholds can be determined in any number of ways, the one described in the Weideman patent mentioned above being one of them. Another way is simply to divide the range between BCK_LEVEL and UPPER_THR into 2^N - 2 equal sub-ranges by equally spacing the remaining 2^N - 3 quantization thresholds. Another example, which I use in the illustrated embodiment, is to divide the range between BCK_LEVEL and UPPER_THR into two parts, the lower part being the range between BCK_LEVEL and IMG and the upper part being the range between IMG and UPPER_THR.
  • IMG is so chosen, in a manner described below, as to lie in the midst of the values assumed by pixels at "interesting" positions in the image, such as the edges of characters. I put half of the quantization thresholds in each part, spacing them equally within each part so that all sub-ranges in the lower part are of the same size and all those in the upper part are of the same size, although the sub-ranges in one part are not in general the same size as those in the other part.
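  • One plausible reading of this placement scheme can be sketched in Python; the function name, and the assumptions that the highest threshold coincides with UPPER_THR and the middle one with IMG, are mine rather than the patent's:

```python
def build_thresholds(bck_level, img, upper_thr, n_bits):
    """Build the 2**n_bits - 1 thresholds for an n_bits-per-pixel
    output: the lowest at BCK_LEVEL, half of the rest spaced equally
    up to IMG, and the other half spaced equally from IMG up to
    UPPER_THR."""
    half = (2 ** n_bits - 2) // 2   # thresholds per part above BCK_LEVEL
    lower = [bck_level + k * (img - bck_level) / half
             for k in range(1, half + 1)]
    upper = [img + k * (upper_thr - img) / half
             for k in range(1, half + 1)]
    return [bck_level] + lower + upper

# A 3-bit output needs 7 thresholds; sub-ranges below IMG come out
# narrower than those above it when IMG sits closer to BCK_LEVEL.
print(build_thresholds(40, 120, 220, 3))
```

Because IMG is centered on the "interesting" pixel values, this split spends finer resolution where character edges actually fall.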
  • A quantization operation does not necessarily require a reduction in the number of bits per pixel; imposition of quantization thresholds different from those implicitly employed in the input data might be used to generate output values different from the input values even though input and output values are expressed in the same number of bits per pixel.
  • Quantization therefore does not necessarily mean reduction in the number of bits per pixel, either generally or as applied to the present invention.
  • The legibility benefits of the present invention can result even in applications in which there is no reduction in the number of bits per pixel.
  • If the number of output bits per pixel is the same as the number of input bits per pixel or differs from it by only one, I prefer to divide the "signal" range above BCK_LEVEL into equal sub-ranges.
  • Here trunc(x) means truncation, i.e., taking the largest whole number that does not exceed x.
  • We now turn to the manner in which the parameters BCK_LEVEL, IMG, and UPPER_THR are determined.
  • The pixel values from which the parameters are derived could be drawn from the same version of the image as those compared with the quantization thresholds, e.g., in the illustrated embodiment, from the image version that the adaptive filter produces. This is not necessary, however; as was stated above, the illustrated embodiment derives the parameters BCK_LEVEL, IMG, and UPPER_THR from the raw version produced by the scanner 20 rather than from the version produced by the filter 24.
  • In determining these parameters, the quantizer first calculates some intermediate values that it uses to update them. For each pixel 26 (Fig. 2), the quantizer establishes two windows, a larger window 28 and a smaller window 30.
  • The larger window 28 is a neighborhood within which an average neighborhood level M_a and an "activity level" M_d, i.e., a variation measure, are computed.
  • The inner window 30 is one over which a low-pass filter is applied to filter out noise before certain tests, to be described below, are applied.
  • The window sizes and shapes are merely exemplary; indeed, although the pixel 26 under consideration is shown as being centered in both windows, as it typically would be, such centering is not at all critical to the operation of the invention.
  • The quantizer begins producing output pixel values once the window lag has occurred. At each pixel time, the quantizer compares the input value with the established threshold values and generates an output in accordance with the result, in the manner described above. It also computes or adjusts several intermediate values, which are used once every other scan line to update the values BCK_LEVEL, IMG, and UPPER_THR so as to establish new threshold values for use on the next two scan lines.
  • Fig. 3A depicts the part of the routine that is performed at each pixel time.
  • The general purpose of this part of the routine is to adjust three intermediate values, BCK_THR, HCHAR, and LCHAR, which are used at the end of every two scan lines to provide new values of BCK_LEVEL, IMG, and UPPER_THR.
  • The routine of Fig. 3A initializes BCK_THR, HCHAR, and LCHAR to values of 0, 128, and 200, respectively, and it establishes thresholds in accordance with BCK_LEVEL, IMG, and UPPER_THR values of 0, 180, and 220, respectively. That is, the quantization levels that the illustrated embodiment employs on the first scan line do not depend on the input values. This is not a necessary characteristic of the invention. Other embodiments, which may, for instance, use a two-pass operation or compute initial values from the several lines acquired before output-pixel-value generation commences, may employ input-dependent parameters from the very beginning. Ordinarily, however, the appearance of the first few scan lines is not important, and the illustrated embodiment thus assigns the first-line quantization levels arbitrarily. This is the function of step 32 of Fig. 3A.
  • Fig. 3A departs from the usual flowchart convention and depicts two parts of the routine as occurring in parallel.
  • One part, which begins with a decision step 34, is used to develop the intermediate value BCK_THR, from which the background- and signal-range-determining parameter BCK_LEVEL is obtained.
  • Determination of BCK_THR is based on the mean M_a of the raw-version pixel values in each larger window 28. As a result of this part of the routine, BCK_THR tends toward the average of these local mean values. In the illustrated embodiment, however, a local mean M_a affects BCK_THR only if it is less than a predetermined value in the darker part of the input range. Specifically, the illustrated embodiment is intended for eight-bit input values, which range from 0 through 255, and the BCK_THR computation is based only on pixels in regions light enough that their M_a values are less than 205. This is the purpose of step 34, which causes the BCK_THR-adjusting steps to be bypassed if M_a is not less than 205.
  • Step 36 determines whether the pixel under consideration is the first one in a scan line. If so, BCK_THR is simply set equal to M_a, as block 38 indicates. Otherwise, BCK_THR is compared with M_a, as block 40 indicates, and BCK_THR is then adjusted up or down toward the M_a value, as blocks 42 and 44 indicate.
  • The adjustment increment 1/K is the reciprocal of the width of the larger window; that is, it requires K pixel times to change BCK_THR by one unit out of the 256-unit range of possible input values.
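  • A sketch of this running adjustment, assuming eight-bit data and the 205 cutoff mentioned above (the function name and the handling of exact ties are illustrative assumptions):

```python
def update_bck_thr(bck_thr, m_a, k, first_in_line):
    """Nudge BCK_THR toward the larger-window mean M_a by 1/K per
    pixel; very dark neighborhoods (M_a >= 205) are ignored, and the
    first pixel of a line simply seeds BCK_THR with M_a."""
    if m_a >= 205:            # step 34: bypass dark regions
        return bck_thr
    if first_in_line:         # steps 36/38: seed with the local mean
        return float(m_a)
    if m_a > bck_thr:         # steps 40/42/44: creep 1/K toward M_a
        return bck_thr + 1.0 / k
    if m_a < bck_thr:
        return bck_thr - 1.0 / k
    return bck_thr

print(update_bck_thr(100.0, 150, 16, False))  # 100.0625
```

The 1/K step makes BCK_THR a slow-moving estimate of the line's background, insensitive to any single window.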
  • HCHAR and LCHAR are the highest and lowest raw-version pixel values, respectively, that have been encountered in the current scan line, fall within the signal range, and meet certain criteria that brand them as being in the "interesting" parts of the image.
  • The determination of whether a pixel is "interesting" is made in step 46, which applies two tests. The first test determines whether the smaller-window mean M_b exceeds the larger-window mean M_a. The second test determines whether the variation within the larger window exceeds a general background variation level.
  • The "activity" level M_d is a measure of the variation of the raw pixel values within the larger window 28. (Of course, there is no need for the activity level M_d to be computed over the same window as the neighborhood average M_a; I have simply found it convenient to use the same window for both.) Any variation measure can be used for this purpose, the most common being the standard deviation. Because of computation-time considerations, however, I prefer to use as a variation measure the average absolute mean deviation.
  • That is, M_d = (1/KL) Σ_i Σ_j |P_ij - M_w|, where P_ij is the raw-version value of the j-th pixel in the i-th row, K is the larger-window width, L is the larger-window height, M_w is the average level in the window over which the activity level is computed (and is, in the illustrated embodiment, equal to M_a), the sums run over i = y, ..., y+L-1 and j = x, ..., x+K-1, and the upper-left-corner pixel in that window is the x-th pixel in the y-th row.
  • This value is compared with a general background activity level MD_THR, which is computed in such a way as to tend toward the activity-level value that would result from computing the variation over the entire raw version of the image.
  • Of course, a value strictly equal to one computed over the entire image cannot be obtained in a one-pass arrangement.
  • A good substitute can be obtained, however, by using the approach depicted in Fig. 4 for determining MD_THR. If the filter 24 is of the type described in my United States patent application for a Method and Apparatus for Spatially Variant Filtering, filed on even date herewith, such an MD_THR value will be available from the filter 24.
  • The overall approach in the Fig. 4 routine is to keep a single value of the activity threshold MD_THR throughout a whole scan line but to update it at the end of each scan line in accordance with an intermediate variation-indicating value IMG_MD.
  • The routine so varies IMG_MD during the line scan that it tends toward the average of the variations in the windows of those pixels whose pixel variations exceed a level PREV_MD that the routine has identified as being the average variation in certain low-spatial-frequency regions of the image.
  • The Fig. 4 routine initializes the activity threshold MD_THR as well as three variables, IMG_MD, PREV_MD, and BG_MD, used in the routine, as block 47 indicates.
  • I have used an initialization value of eight for all three of these variables, but this value is not at all critical.
  • Fig. 4 departs from the usual flowchart convention by depicting two parts of the routine as operating in parallel.
  • One part, which will be described below, begins with decision block 48, while the other part, which is employed to establish the average activity level (i.e., variation) PREV_MD in low-frequency regions, begins with another decision block 50.
  • PREV_MD is updated only at the end of each scan line.
  • To this end, a temporary value BG_MD is adjusted by the part of the Fig. 4 routine that starts with block 50.
  • Block 50 employs as a criterion the equality of M_a and M_b, i.e., of the larger- and smaller-window averages. If the pixel under consideration meets this criterion, then BG_MD is adjusted up or down slightly toward the activity level M_d within that particular pixel's larger window. Specifically, if a step represented by decision block 52 determines that BG_MD exceeds M_d, then BG_MD is decreased by 1/100 in a step represented by block 54; that is, step 54 must be reached one hundred times to change BG_MD by one unit at the pixel-value resolution. If BG_MD does not exceed M_d, it is increased by 1/100 in a step represented by block 56.
  • Such an adjustment of BG_MD occurs for every pixel that meets the criterion of block 50.
  • At the end of each scan line, the average low-frequency-region activity-level value PREV_MD is updated to equal BG_MD, as block 58 indicates.
  • This PREV_MD value is used in the first step, represented by block 48, of the other parallel routine. Specifically, the decision step of block 48 compares the current pixel's activity level M_d with the low-frequency-region activity level PREV_MD. If M_d is not at least equal to that low-frequency-region activity level, then the value of the variation within the current pixel's window will not be used to affect the activity threshold MD_THR. Otherwise, the temporary-value variable IMG_MD, from which the activity-level threshold MD_THR is updated, is adjusted slightly up or down toward the variation within the current pixel's window, as blocks 60, 62, and 66 indicate.
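  • The per-pixel BG_MD bookkeeping described above can be sketched as follows; the patent specifies the 1/100 step, while the function name and tie handling are illustrative assumptions:

```python
def update_bg_md(bg_md, m_a, m_b, m_d):
    """Block 50's criterion: only pixels whose larger- and
    smaller-window means agree (low-spatial-frequency regions)
    adjust BG_MD, which creeps by 1/100 toward that pixel's
    activity level M_d (blocks 54 and 56)."""
    if m_a != m_b:
        return bg_md          # not a low-frequency pixel: no change
    if bg_md > m_d:
        return bg_md - 0.01   # block 54
    return bg_md + 0.01       # block 56
```

Because only pixels with agreeing window means contribute, BG_MD settles toward the typical variation of smooth background regions rather than of character edges.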
  • At the end of each scan line, the activity threshold MD_THR is adjusted in accordance with the equation of block 70.
  • That is, the activity threshold MD_THR is essentially adjusted halfway from its previous value to the current value of IMG_MD at the end of each line. (The "bias" represented by the "+1" in the equation is used only to overcome an artifact of the limited-precision arithmetic used in the calculation.)
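  • In integer arithmetic, one plausible form of block 70's update is the following; since the exact equation appears only in the figure, the placement of the "+1" here is an assumption:

```python
def update_md_thr(md_thr, img_md):
    """End-of-line update: move MD_THR halfway from its previous
    value to IMG_MD; the +1 bias keeps integer truncation from
    dragging the threshold downward."""
    return (md_thr + img_md + 1) // 2

print(update_md_thr(10, 21))  # 16
```

Note that a fixed point is preserved: when MD_THR already equals IMG_MD, the +1 is absorbed by the truncation and the threshold stays put.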
  • HCHAR is a store of the highest-valued "interesting" pixel encountered so far in the current scan line. Since HCHAR ratchets upward, it also ends up within the signal range even though the routine reaches steps 78 and 80 regardless of the outcome of test 72.
  • The routine next branches on a determination, represented by block 82, of whether the last pixel in a line has been reached. If not, the routine loops back to perform the previous steps for the next pixel. At the end of a scan line, however, the routine proceeds to the part depicted in Fig. 3B.
  • At the end of the first scan line, the routine branches at a decision represented by block 84 to step 86, in which two intermediate parameters, PREV_HCHAR and PREV_LCHAR, are set to HCHAR and LCHAR, respectively. Additionally, BCK_LEVEL is set to the lower of 160 and either BCK_THR, as shown in the drawing, or, in a more-elaborate embodiment, BCK_THR with some user-defined constant added to it or multiplied by it.
  • The routine then proceeds to step 88, in which it computes the other two quantization-threshold-determining parameters, IMG and UPPER_THR. Specifically, IMG is set equal to the lower of 191 and the average of PREV_HCHAR and PREV_LCHAR. Similarly, UPPER_THR is set equal to either 222 or BCK_LEVEL + IMG, whichever is lower.
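  • The step-86/88 arithmetic reduces to a few clamped computations; a sketch, with the function name assumed for illustration:

```python
def first_line_params(bck_thr, hchar, lchar):
    """Steps 86 and 88: clamp BCK_LEVEL at 160, IMG at 191, and
    UPPER_THR at 222, deriving IMG from the mean of the
    interesting-pixel extremes and UPPER_THR from BCK_LEVEL + IMG."""
    prev_hchar, prev_lchar = hchar, lchar
    bck_level = min(160, bck_thr)
    img = min(191, (prev_hchar + prev_lchar) // 2)
    upper_thr = min(222, bck_level + img)
    return bck_level, img, upper_thr

print(first_line_params(40, 200, 128))  # (40, 164, 204)
```

The three caps (160, 191, 222) keep the thresholds within the eight-bit input range even on unusually dark or noisy lines.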
  • The routine resets a toggle flag in step 90, for purposes that will be explained below, and proceeds to the step represented by block 92, in which all thresholds are set in accordance with the approach previously described. These thresholds are then employed for the next two scan lines of pixel values in the output of the adaptive filter 24. At the same time, the values of HCHAR and LCHAR are reset to their initialization values of 128 and 200, respectively.
  • On subsequent lines, the negative result of the block-84 test directs the routine to the step represented by block 94, in which the previously mentioned toggle flag is tested.
  • The thresholds are updated only after every other line, and the flag indicates whether the current line is an odd line or an even line. If the value of the flag is zero, it is set to one in step 96, and the routine returns to the steps of Fig. 3A to process the first pixel in the next line. If the value of the flag was already one, however, the routine updates BCK_LEVEL, PREV_HCHAR, and PREV_LCHAR in steps that Fig. 3B represents as occurring in parallel.
  • Blocks 98, 100, and 102 represent incrementing or decrementing the quantization-threshold-determining parameter BCK_LEVEL by one in accordance with whether the intermediate parameter BCK_THR is greater or less than the previous BCK_LEVEL value.
  • Blocks 104, 106, and 108 represent performing similar updating of PREV_HCHAR in accordance with the value of HCHAR, and blocks 110, 112, and 114 represent a similar adjustment of PREV_LCHAR in accordance with the value of LCHAR.
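  • Each of these three parallel updates is the same one-unit step toward an intermediate value; a sketch (function name assumed):

```python
def step_toward(current, target):
    """Blocks 98-114: every other scan line, move a parameter
    (BCK_LEVEL, PREV_HCHAR, or PREV_LCHAR) by at most one unit
    toward its intermediate value (BCK_THR, HCHAR, or LCHAR)."""
    if target > current:
        return current + 1
    if target < current:
        return current - 1
    return current

# BCK_LEVEL creeps toward BCK_THR one unit per two-line update.
print(step_toward(100, 150))  # 101
```

Limiting each update to one unit is what makes the parameters, and hence the thresholds, vary slowly from line to line.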
  • The two other quantization-threshold-determining parameters, IMG and UPPER_THR, are then computed in step 88, the flag is reset in step 90, and the thresholds are recalculated in step 92 as before.
  • The routine then returns as before to begin processing the next scan line.
  • The PREV_HCHAR and PREV_LCHAR values thus vary slowly from scan line to scan line, tending toward the average high and low M_b values of those pixels identified by the variation and M_a-versus-M_b criteria as being "interesting."
  • The IMG value, which is the arithmetic mean of these values, is then used to "center" the quantization thresholds, in the manner described above, over the values thus identified as being of most interest. In this way, the illustrated embodiment adaptively concentrates its available resolution in the areas of most interest.
  • The illustrated embodiment is merely one example of the manner in which the present invention can be employed.
  • For instance, the BCK_THR updating of steps 36, 38, 40, 42, and 44 could be based on the small-window mean M_b rather than on the large-window mean M_a; in fact, it could be based on the values of individual pixels.
  • Moreover, although a bias of the type imposed by step 34 is used, such a bias is not required, and, when one is used, values differing from the one that I employ should also produce good results.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Facsimile Image Signal Circuits (AREA)
  • Image Input (AREA)
  • Image Processing (AREA)
EP19920115296 1991-09-10 1992-09-07 Method and apparatus for gray-level quantization. Withdrawn EP0531923A3 (de)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US75710791A 1991-09-10 1991-09-10
US757107 1991-09-10

Publications (2)

Publication Number Publication Date
EP0531923A2 true EP0531923A2 (de) 1993-03-17
EP0531923A3 EP0531923A3 (de) 1994-10-12

Family

ID=25046381

Family Applications (1)

Application Number Title Priority Date Filing Date
EP19920115296 Withdrawn EP0531923A3 (de) 1991-09-10 1992-09-07 Verfahren und Gerät zur Grautonquantizierung.

Country Status (2)

Country Link
EP (1) EP0531923A3 (de)
JP (1) JPH05227438A (de)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0630158A1 (de) * 1993-06-17 1994-12-21 Sony Corporation Coding of analog image signals
CN110891010A (zh) * 2018-09-05 2020-03-17 Baidu Online Network Technology (Beijing) Co., Ltd. Method and apparatus for sending information
CN116366411A (zh) * 2023-03-28 2023-06-30 Yangzhou Yu'an Electronic Technology Co., Ltd. Adaptive threshold generation and quantization method for multi-bit signal quantization

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0070161A2 (de) * 1981-07-09 1983-01-19 Xerox Corporation Apparatus and method for adaptive threshold determination
US4578715A (en) * 1983-02-14 1986-03-25 Ricoh Company, Ltd. Picture signal quantizing circuit
JPS62237589A (ja) * 1986-04-09 1987-10-17 Fuji Electric Co Ltd Binarization device
US4903316A (en) * 1986-05-16 1990-02-20 Fuji Electric Co., Ltd. Binarizing apparatus
EP0468730A2 (de) * 1990-07-23 1992-01-29 Nippon Sheet Glass Co., Ltd. Method for binary quantization of images

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
PATENT ABSTRACTS OF JAPAN vol. 12, no. 104 (P-685) 6 April 1988 & JP-A-62 237 589 (FUJI ELECTRIC CO LTD) 17 October 1987 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0630158A1 (de) * 1993-06-17 1994-12-21 Sony Corporation Coding of analog image signals
US5610998A (en) * 1993-06-17 1997-03-11 Sony Corporation Apparatus for effecting A/D conversion on image signal
CN110891010A (zh) * 2018-09-05 2020-03-17 Baidu Online Network Technology (Beijing) Co., Ltd. Method and apparatus for sending information
CN110891010B (zh) * 2018-09-05 2022-09-16 Baidu Online Network Technology (Beijing) Co., Ltd. Method and apparatus for sending information
CN116366411A (zh) * 2023-03-28 2023-06-30 Yangzhou Yu'an Electronic Technology Co., Ltd. Adaptive threshold generation and quantization method for multi-bit signal quantization
CN116366411B (zh) * 2023-03-28 2024-03-08 Yangzhou Yu'an Electronic Technology Co., Ltd. Adaptive threshold generation and quantization method for multi-bit signal quantization

Also Published As

Publication number Publication date
JPH05227438A (ja) 1993-09-03
EP0531923A3 (de) 1994-10-12

Similar Documents

Publication Publication Date Title
EP0531904A2 (de) Verfahren und Vorrichtung zur räumlich veränderlicher Filterung
EP0414505B1 (de) Algorithmus zur Fehlerverbreitung mit Randverstärkung und Verfahren für Bildkodierung.
US5243444A (en) Image processing system and method with improved reconstruction of continuous tone images from halftone images including those without a screen structure
JP3686439B2 (ja) デジタル・イメージのフォト領域検出システム、及び方法
JP3640979B2 (ja) 線形フィルタリングと統計的平滑化とを用いた逆ハーフトーン化方法
AU635676B2 (en) Image processing apparatus
US6118547A (en) Image processing method and apparatus
US5832132A (en) Image processing using neural network
US5870505A (en) Method and apparatus for pixel level luminance adjustment
US5623558A (en) Restoration of images with undefined pixel values
US4850029A (en) Adaptive threshold circuit for image processing
EP0525359B1 (de) System zur Bildverarbeitung
US5438633A (en) Method and apparatus for gray-level quantization
US6044179A (en) Document image thresholding using foreground and background clustering
JP3031994B2 (ja) 画像処理装置
EP0531923A2 (de) Verfahren und Gerät zur Grautonquantizierung
JPH0669212B2 (ja) 適応擬似中間調化回路
JPH0951431A (ja) 画像処理装置
JP2617986B2 (ja) 画像処理方法
JP2860039B2 (ja) 擬似中間調画像縮小装置
JPS62239667A (ja) 画像処理装置
KR930005131B1 (ko) 히스토그램 평활화법을 이용한 중간조 화상 추출 방법
JP2601156B2 (ja) 画像処理装置
KR0150164B1 (ko) 화상 처리장치에 있어서 오차 확산 방법에 의한 양자화 방법 및 장치
JP2757868B2 (ja) 画像情報の2値化処理回路

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): DE FR GB

PUAL Search report despatched

Free format text: ORIGINAL CODE: 0009013

AK Designated contracting states

Kind code of ref document: A3

Designated state(s): DE FR GB

17P Request for examination filed

Effective date: 19950331

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

18W Application withdrawn

Withdrawal date: 19960322