US20020012475A1 - Image segmentation apparatus and method - Google Patents

Image segmentation apparatus and method

Info

Publication number
US20020012475A1
Authority
US
United States
Prior art keywords
image data
neighborhood
window
pixel
peak
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US09/158,788
Other versions
US6389164B2 (en)
Inventor
Xing Li
Michael E. Meyers
Francis K. Tse
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xerox Corp
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to US09/158,788
Assigned to XEROX CORPORATION. Assignment of assignors interest (see document for details). Assignors: LI, XING; MEYERS, MICHAEL E.; TSE, FRANCIS K.
Publication of US20020012475A1
Application granted
Publication of US6389164B2
Assigned to BANK ONE, NA, AS ADMINISTRATIVE AGENT. Security interest (see document for details). Assignors: XEROX CORPORATION
Assigned to JPMORGAN CHASE BANK, AS COLLATERAL AGENT. Security agreement. Assignors: XEROX CORPORATION
Anticipated expiration
Assigned to XEROX CORPORATION. Release by secured party (see document for details). Assignors: JPMORGAN CHASE BANK, N.A. AS SUCCESSOR-IN-INTEREST ADMINISTRATIVE AGENT AND COLLATERAL AGENT TO JPMORGAN CHASE BANK
Legal status: Expired - Lifetime

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/20 Image enhancement or restoration by the use of local operators
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30176 Document

Definitions

  • This invention relates to an image processing method and system. More particularly, this invention classifies input image pixels into different classifications prior to output.
  • In digital reproduction of documents such as in the digital copier environment, a document is first optically scanned and converted to a gray scale image. In the case of color reproduction, the document may be converted to a gray scale image of several separations, such as the R, G and B separations.
  • In order to produce a hard copy of the scanned and digitized image, the image has to be further processed according to the requirements of the marking engine. For example, if the marking engine is capable of bi-level printing, then the image has to be rendered into a 1-bit bit map for printing. To preserve the appearance of a gray scale image in a binary output, often some digital halftoning process is used in which the multi-bit input image is screened with a periodic array.
  • the present invention provides a method and apparatus for classifying image data.
  • a video peak/valley counter may count one of peaks and valleys within a window of the input image data.
  • a local roughness device may determine a local roughness of the input image data.
  • the input image data may be classified based on the count of the video peak/valley counter device and the local roughness of the local roughness detector.
  • a neighborhood average gray value may be determined for the input image data.
  • a pixel under consideration may be evaluated to determine if it is a peak or valley based on whether its brightness is greater or less than a peak threshold value or valley threshold value, which are based on the neighborhood average gray value.
  • a peak/valley detection device may determine one of a peak and a valley count within a window of the image data around a pixel under consideration.
  • a neighborhood checking device may check whether any video peaks or valleys are located within a neighborhood of the pixel under consideration.
  • a halftone dot count of a window may be determined. If the determined halftone dot count is less than a predetermined number, then a neighborhood of the pixel under consideration is checked for any peaks and valleys. The data is then classified based on the number of peaks and valleys if there are any peaks or valleys within the neighborhood.
  • pixels within a window may be evaluated to determine respective peaks and valleys. Each of the pixels within the window may be evaluated unless any pixel within a neighborhood of a desired pixel has previously been classified as a peak or valley.
  • a processing device may determine a peak or valley within a window of the image data.
  • the window may include a neighborhood of pixels about a specified pixel.
  • the processing device may determine the peaks and valleys within the window unless a pixel within the neighborhood has been determined to be a peak or valley.
  • FIG. 1 shows one example of a video matrix
  • FIG. 2 shows one embodiment of the present invention
  • FIG. 3 shows a two-dimensional look-up table in accordance with the present invention
  • FIG. 4 shows eight patterns
  • FIGS. 5A and 5B show examples of a video context window
  • FIG. 6 shows one example of a neighborhood for a pixel under consideration
  • FIG. 7 shows another example of a neighborhood for a pixel under consideration
  • FIGS. 8A and 8B show video peaks and valleys in a 24×8 window
  • FIG. 9 shows a plot of threshold and video average in accordance with the present invention.
  • segmentation may classify an image, on a per pixel basis, into one of several possible classifications.
  • input video pixels may be classified, on a pixel-by-pixel basis, into one of 32 different image types.
  • This classification may be known as tags or effect pointers.
  • the tags may be used by downstream image processing to specify different filtering, rendering and other operations based on the classification.
  • the present invention preferably accomplishes this by looking at a 5×5 (fast scan by slow scan) pixel context and determining various characteristics such as the presence and magnitudes of edges (horizontal or vertical), average value of video, minimum and maximum values of video, etc.
  • This matrix is shown in FIG. 1 and can be viewed as a “window” that slides across and down the input image.
  • the center pixel, V22, is the pixel being processed/classified.
  • the pixels may be generally referenced as Vij, where i is the slow scan index and j is the fast scan index.
  • a video matrix of 5×5 is used herein as a preferred example; however, video matrices other than 5×5 are also within the scope of this invention.
  • the shift array may develop the video context matrix as the input image moves through the processing architecture.
  • the outputs of this module may be fed into the various modules that need all or some of this context.
  • the data is preferably stored in a buffer or buffer-like device prior to and during preprocessing. Accordingly, as soon as the value of pixel V44 is available, the classification of pixel V22 can be started.
  • a shift array may be used that brings in the current scan line and the four previous scan lines that have been stored in the scan line buffers.
  • FIG. 2 shows a possible architecture for the image segmentation apparatus and method according to the present invention.
  • processing may occur on a special purpose computer, a programmed microprocessor or microcontroller and peripheral integrated circuit elements, an ASIC or other integrated circuit, a hardwired electronic or logic circuit such as a discrete element circuit, a programmable logic device such as a PLD, PLA, FPGA or PAL, or the like.
  • specific algorithms may be accomplished using software in combination with specific hardware.
  • two major features that may be extracted for segmentation in accordance with an embodiment of the present invention are video peak/valley count within a window containing the pixel being classified and the local roughness.
  • Local roughness may represent the degree of gray level discontinuity computed as a combination of some gradient operators.
  • One example of local roughness is the difference between the maximum and minimum of nine 3×3 window sums within a 5×5 video context.
  • Other methods of determining the local roughness are also within the scope of the present invention.
  • a pixel may be considered as a video peak or valley if its gray level is the highest or lowest in the neighborhood and the gray level difference between the pixel and the neighborhood average is greater than a certain threshold. Other methods of determining video peaks and/or valleys are also within the scope of the present invention.
  • Several lines of peak and valley patterns may be recorded in scan line buffers for computing peak/valley count within a defined window.
  • peak/valley count and local roughness may be used as indices to form a two-dimensional look-up table (hereafter also called a classification table) as a basis to classify data.
  • the video data may be mapped to certain classes such as low frequency halftone, high frequency halftone, smooth continuous tone, rough continuous tone, edge, text on halftone, etc.
  • the input data may be processed differently.
  • the two-dimensional look-up table allows for flexibility in processing and rendering decision making, which in turn makes it possible to use a smaller video context and intermediate results buffer in the segmentation, and at the same time improve the image quality in areas such as stochastic screens, line screens, etc.
  • the look-up table (i.e., classification table) may be complemented with some special classes.
  • One of them is the “edge class”. It tries to identify some line art and kanji area that could be missed by the look-up table.
  • Another special class is the “white class”. It makes use of the absolute gray level information in addition to peak/valley count and roughness.
  • the “default class” shown in FIG. 2 may be used for the borders of an image.
  • the classification look-up table output may be multiplexed with the special classes to produce the final classification of a pixel (i.e., class output).
  • the classification table assignment may be programmable, which allows for more flexibility in rendering adjustment.
  • major features that may be extracted for segmentation include local roughness and video peak/valley count.
  • the local roughness may be the difference between the maximum and minimum of the nine 3×3 window sums within a 5×5 video context, which extracts edge and text information effectively in most cases.
  • However, the roughness detector is not sensitive to some line art and kanji text patterns and is prone to false detection of video peaks/valleys.
  • pattern detection may be introduced to complement other parts of the segmentation algorithm. This technique looks at the pattern of pixels across a scan line (or matrix) in order to classify the data.
  • FIG. 4 shows eight examples of patterns which may be used in the segmentation algorithm of the present invention.
  • ↑ and ↓ denote transitions required across one line of the 5×5 video context, while a third marker represents that a certain threshold has to be met.
  • These patterns are best defined according to certain parameters that must be met to classify the pixels as corresponding to a pattern. One embodiment of these parameters is described below. One skilled in the art would understand how these rule-based parameters are implemented in the algorithm of the present invention.
  • Vij represents the pixel at the ith row and the jth column, with the row being in the fast scan direction and the column being in the slow scan direction.
  • updownFs[j] is assigned the value 1; on the other hand, if the following conditions are met,
  • updownFs[j] is assigned the value 2. Otherwise, updownFs[j] will be neither 1 nor 2.
  • Patterns 1-8 shown in FIG. 4 are then classified as corresponding to a specific pattern based on several variables. In a preferred embodiment, this pattern classification is as follows:
  • Pattern 4
  • Pattern 8
  • FIGS. 5A and 5B show examples of a 5×5 video context window with the pixel values shown.
  • the numbers (200, 50, etc.) are gray levels of the pixels.
  • FIG. 5A corresponds to Pattern 1 shown in FIG. 4, while FIG. 5B shows an example of a video context window corresponding to Pattern 3.
  • Patterns in the fast scan direction may also be detected in accordance with the present invention.
  • a white class that also makes use of the absolute gray value of the pixel may be desirable.
  • three features may be used in detecting white class, namely, the brightness, the roughness and the halftone dot count.
  • Prior segmentation schemes rely only on peak/valley count and background threshold to determine white class.
  • including both roughness and halftone dot count to detect a white class adds flexibility to the algorithm without a significant cost increase.
  • the gray level of the pixel under classification should be greater than a predetermined value.
  • the predetermined value may be a default number or may be obtained by computing the lead edge histogram of the image and detecting the peak at the light end of the histogram. The histogram of the whole page may also be used if it is available.
  • the local roughness of the pixel should also be below a certain threshold. As discussed above, the local roughness may be the difference between the maximum and minimum of the nine 3×3 window sums within a 5×5 video context.
  • the halftone dot count within a defined neighborhood of the pixel should be small enough.
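The three white-class conditions above can be combined in a short sketch. The numeric limits below are assumed placeholder defaults (the patent leaves them programmable, and the brightness threshold may instead come from a lead-edge histogram), and the function name is illustrative:

```python
def is_white(gray, roughness, dot_count,
             white_threshold=230, roughness_limit=12, dot_limit=1):
    """Rule-of-thumb white-class test combining the three features:
    brightness above a predetermined value, local roughness below a
    threshold, and a small halftone dot count in the neighborhood.
    All limits are assumed example values, not patent parameters."""
    return (gray > white_threshold
            and roughness < roughness_limit
            and dot_count <= dot_limit)
```

A bright, smooth pixel with no nearby halftone dots (e.g. `is_white(240, 5, 0)`) passes; failing any one condition rejects the white class.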
  • low-frequency halftone images are often processed and rendered differently than other types of pictorials such as high-frequency halftones, continuous tones, etc.
  • high-frequency halftone images may be converted to continuous tone images using a low-pass filter and then re-screened for printing.
  • Low-frequency halftones are often rendered with error diffusion.
  • Stochastic screen originals and line screen originals are some typical examples.
  • some part of high-frequency halftones may be misclassified as low-frequency image areas due to the missing peaks/valleys in some local areas. False detection of low-frequency halftone may result in severe artifacts.
  • One embodiment of the present invention uses a neighborhood checking mechanism to reduce the false detection of low-frequency halftones.
  • the halftone dot count within a window (i.e., 24 columns by 8 lines) may be determined.
  • the entries of the look-up table are then mapped to a certain number of classes. Neighborhood checking may be performed when the halftone dot count within the window is smaller than a programmable parameter. The algorithm checks a defined neighborhood of any peak/valley within the window.
  • FIG. 6 shows one example in which the shaded pixels form the neighborhood of the black pixel (the pixel under detection). If there are peaks/valleys in the neighborhood of a peak/valley, then instead of using the original halftone dot count, a special index may be given to the pixel under detection.
  • This neighborhood checking ensures that unless the video peaks/valleys within a window are some distance apart, the pixel will not be considered as a low-frequency halftone. That is, if there are closely located peaks/valleys within a window, then the pixel is not part of a low-frequency halftone but rather may be a high-frequency halftone.
  • FIG. 7 shows a similar embodiment in which the black pixel is under detection and the shaded pixels are neighboring pixels of interest.
  • the pixel under detection will not be considered as a peak or a valley if any of the neighboring pixels of interest (i.e., the shaded pixels) are peaks or valleys.
  • the logic is easy to implement in hardware. The logic guarantees that if a pixel is detected as a peak or valley, then none of its eight immediate neighbors will be peaks or valleys.
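A minimal sketch of this mutual-exclusion logic, assuming candidate positions have already passed the gray-level test and are scanned in raster order:

```python
def mark_peaks(candidates, height, width):
    """Scan pixels in raster order, marking a candidate as a peak only
    if none of its eight immediate neighbors is already marked.

    `candidates` is a set of (row, col) positions. The raster-order
    suppression guarantees that no two marked peaks are 8-connected,
    matching the hardware-friendly property described above."""
    marked = set()
    for r in range(height):
        for c in range(width):
            if (r, c) not in candidates:
                continue
            neighbors = {(r + dr, c + dc)
                         for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                         if (dr, dc) != (0, 0)}
            if not (neighbors & marked):
                marked.add((r, c))
    return marked
```

Of two adjacent candidates, only the one reached first survives, so every marked peak is isolated from the others by at least one pixel.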
  • a triple window may be used to determine peak/valley counts and the halftone dot count average associated with the pixel under classification. For example, the peak/valley information in a window of 24 columns by 8 lines around a pixel is examined.
  • FIG. 8A shows the 24×8 window divided into three smaller (8×8) windows. In each of the 8×8 windows, the greater of the peak and valley counts is chosen to represent the halftone dot count of that window. This improves the accuracy of counting in areas with major gray level changes.
  • a set of rules may be used to determine the final halftone dot count associated with the pixel under classification.
  • the halftone dot counts of the three windows are considered dotLeft, dotMid and dotRight respectively, and dotCount is the final halftone dot count.
  • LOWCOUNT is a programmable parameter.
  • FIG. 8A shows an example of a video peak pattern in a 24×8 window
  • FIG. 8B shows an example of a video valley pattern in a 24×8 window.
  • Here, dotLeft, dotMid and dotRight are 4, 4 and 5, respectively. If LOWCOUNT is set to be 3, then dotCount, which reflects two thirds of the halftone dot count within the 24×8 window, is 9. This type of rule-based calculation generally works better than simple averaging in transitional areas.
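The excerpt gives only one worked case of the rule set (4, 4, 5 with LOWCOUNT = 3 yields 9). The sketch below is therefore a plausible reconstruction, not the patented rules: sum the two largest window counts ("two thirds" of the triple window) when every window clears LOWCOUNT, and otherwise fall back to the middle window's count. The fallback branch in particular is an assumption:

```python
LOWCOUNT = 3  # programmable parameter

def dot_count(dot_left, dot_mid, dot_right, lowcount=LOWCOUNT):
    """Rule-based final halftone dot count for the triple window.
    Reconstruction consistent with the single example above; the
    actual rule set in the patent may differ."""
    counts = sorted((dot_left, dot_mid, dot_right))
    if counts[0] >= lowcount:
        # all three windows have enough dots: take the two largest,
        # i.e. two thirds of the 24x8 triple window
        return counts[1] + counts[2]
    return dot_mid  # assumed fallback for transitional areas
```

With the figure's values, `dot_count(4, 4, 5)` returns 9, matching the worked example.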
  • the gray difference between a peak or valley and its neighbors is greatest in mid-tone areas as compared to highlight or shadow areas.
  • one embodiment of the present invention links the threshold for peak/valley detection to the neighborhood average gray value so as to reduce misclassification.
  • a pixel may be considered as a video peak/valley if its gray level is the highest/lowest in the neighborhood and also, the gray level difference between the pixel and the neighborhood average is greater than a certain threshold.
  • the qualifying conditions of peak/valley and the definition of neighborhood may vary. If the threshold for peak/valley detection is set too high, then some halftone dots in the highlight or shadow area may be missed. On the other hand, if the threshold is set too low, then some potential noise or non-halftone video gray level variation could be falsely identified as halftone dots.
  • the threshold for peak/valley detection may be tied to the neighborhood average gray value.
  • the threshold-video average correlation may be established through statistical analysis.
  • the implementation may be a look-up table or some simple formula.
  • a 16-entry table may be used to achieve similar results.
  • video averaging is used to reduce the possibility of misclassifying peaks and valleys.
  • Halftone dots, when present in the form of video peaks, generally occur in areas with relatively low average gray value. The converse is true for halftone dots in the form of video valleys.
  • the threshold for peak detection may be set to be greater than the threshold for valley detection by some margin, making sure that halftone dots will not be missed.
  • the video average could be the average gray level of a 5×5 window.
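The average-linked threshold above might be sketched as a 16-entry look-up indexed by the video average. The table contents, bucketing by the top 4 bits of an 8-bit average, and the peak margin are all illustrative assumptions; the patent says only that the correlation is established statistically and may be a look-up table or a simple formula:

```python
# Assumed 16-entry table indexed by the top 4 bits of the 8-bit video
# average: thresholds largest in mid-tones, smaller toward highlight
# and shadow ends, per the behavior described above.
VALLEY_TABLE = [8, 10, 12, 14, 16, 18, 20, 22,
                22, 20, 18, 16, 14, 12, 10, 8]
PEAK_MARGIN = 4  # peak threshold exceeds valley threshold by a margin

def detection_thresholds(win5):
    """Return (peak_threshold, valley_threshold) for a 5x5 context,
    where the video average is the mean gray level of the window."""
    avg = sum(sum(row) for row in win5) // 25
    valley_t = VALLEY_TABLE[min(avg, 255) >> 4]  # 16 buckets
    return valley_t + PEAK_MARGIN, valley_t
```

A mid-tone window thus demands a larger gray difference for peak/valley detection than a highlight or shadow window, which is the misclassification-reducing behavior the text describes.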

Abstract

The present invention provides a method and apparatus for classifying image data. A video peak/valley counter may count peaks and valleys within a window of the input image data. The apparatus and method check whether any peaks or valleys are located within a neighborhood of a pixel under consideration.

Description

    BACKGROUND OF THE INVENTION
  • 1. Field of Invention [0001]
  • This invention relates to an image processing method and system. More particularly, this invention classifies input image pixels into different classifications prior to output. [0002]
  • 2. Description of Related Art [0003]
  • In digital reproduction of documents such as in the digital copier environment, a document is first optically scanned and converted to a gray scale image. In the case of color reproduction, the document may be converted to a gray scale image of several separations, such as the R, G and B separations. In order to produce a hard copy of the scanned and digitized image, the image has to be further processed according to the requirements of the marking engine. For example, if the marking engine is capable of bi-level printing, then the image has to be rendered into a 1-bit bit map for printing. To preserve the appearance of a gray scale image in a binary output, often some digital halftoning process is used in which the multi-bit input image is screened with a periodic array. However, if the original image itself contains a halftone screen, objectionable moiré patterns may occur due to the interference between the original and the new screens. Also, while dot screen halftoning may be good for rendering continuous tone originals, it may degrade the quality of text and line drawings. Often a document contains different types of images. In order to achieve optimal image quality in document reproduction, a system capable of automatically identifying different types of images within a page is needed. For example, if an image part is identified as halftone, then some kind of low-pass filtering may be applied prior to halftone screening so the gray scale appearance can be preserved without introducing moiré patterns. For text areas, some sharpness enhancement filter could be applied and other rendering techniques such as thresholding or error diffusion could be used. [0004]
  • Early work on image segmentation for the purpose of document reproduction dates back to the 1970s. U.S. Pat. No. 4,194,221, the subject matter of which is incorporated herein by reference, discloses a method for automatic multimode reproduction. It employs autocorrelation in halftone detection. Since then, a lot of work has been published in the area of image segmentation. For example, U.S. Pat. No. 4,740,843, the subject matter of which is incorporated herein by reference, discloses a method of halftone image detection by measuring the distance between successive gray level maxima. U.S. Pat. No. 5,341,277, the subject matter of which is incorporated herein by reference, discloses a dot image discrimination method that counts density change points within an area. One disadvantage common to existing image segmentation systems is the rigidity of the system structure. Usually the system only provides several programmable parameters used for thresholds in detecting video maximum/minimum, halftone dot counting, etc. It does not provide much flexibility to support processing/rendering optimization or to cope with requirement changes. There are other shortcomings in existing segmentation systems related to using a fixed threshold in halftone dot detection, using a simple average in halftone dot counting, etc., which could result in misclassification in certain areas. [0005]
  • SUMMARY OF THE INVENTION
  • The present invention provides a method and apparatus for classifying image data. In one embodiment, a video peak/valley counter may count one of peaks and valleys within a window of the input image data. A local roughness device may determine a local roughness of the input image data. In one embodiment, the input image data may be classified based on the count of the video peak/valley counter device and the local roughness of the local roughness detector. [0006]
  • In one embodiment, a neighborhood average gray value may be determined for the input image data. A pixel under consideration may be evaluated to determine if it is a peak or valley based on whether its brightness is greater or less than a peak threshold value or valley threshold value, which are based on the neighborhood average gray value. [0007]
  • In one embodiment, a peak/valley detection device may determine one of a peak and a valley count within a window of the image data around a pixel under consideration. A neighborhood checking device may check whether any video peaks or valleys are located within a neighborhood of the pixel under consideration. [0008]
  • In one embodiment, a halftone dot count of a window may be determined. If the determined halftone dot count is less than a predetermined number, then a neighborhood of the pixel under consideration is checked for any peaks and valleys. The data is then classified based on the number of peaks and valleys if there are any peaks or valleys within the neighborhood. [0009]
  • In one embodiment, pixels within a window may be evaluated to determine respective peaks and valleys. Each of the pixels within the window may be evaluated unless any pixel within a neighborhood of a desired pixel has previously been classified as a peak or valley. [0010]
  • In one embodiment, a processing device may determine a peak or valley within a window of the image data. The window may include a neighborhood of pixels about a specified pixel. The processing device may determine the peaks and valleys within the window unless a pixel within the neighborhood has been determined to be a peak or valley.[0011]
  • Other objects, advantages and salient features of the invention will become apparent from the following detailed description taken in conjunction with the annexed drawings which disclose preferred embodiments of the invention. [0012]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The invention will be described with reference to the following drawings in which like reference numerals refer to like elements and wherein: [0013]
  • FIG. 1 shows one example of a video matrix; [0014]
  • FIG. 2 shows one embodiment of the present invention; [0015]
  • FIG. 3 shows a two-dimensional look-up table in accordance with the present invention; [0016]
  • FIG. 4 shows eight patterns; [0017]
  • FIGS. 5A and 5B show examples of a video context window; [0018]
  • FIG. 6 shows one example of a neighborhood for a pixel under consideration; [0019]
  • FIG. 7 shows another example of a neighborhood for a pixel under consideration; [0020]
  • FIGS. 8A and 8B show video peaks and valleys in a 24×8 window; and [0021]
  • FIG. 9 shows a plot of threshold and video average in accordance with the present invention. [0022]
  • DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
  • In accordance with one embodiment of the present invention, segmentation may classify an image, on a per pixel basis, into one of several possible classifications. For example, input video pixels may be classified, on a pixel-by-pixel basis, into one of 32 different image types. This classification may be known as tags or effect pointers. The tags may be used by downstream image processing to specify different filtering, rendering and other operations based on the classification. [0023]
  • The present invention preferably accomplishes this by looking at a 5×5 (fast scan by slow scan) pixel context and determining various characteristics such as the presence and magnitudes of edges (horizontal or vertical), average value of video, minimum and maximum values of video, etc. Many of the functions in the segmentation process are easily defined in terms of the 5×5 video matrix. This matrix is shown in FIG. 1 and can be viewed as a “window” that slides across and down the input image. The center pixel, V22, is the pixel being processed/classified. The pixels may be generally referenced as Vij where i is the slow scan index and j is the fast scan index. A video matrix of 5×5 is used herein as a preferred example; however, video matrices other than 5×5 are also within the scope of this invention. [0024]
  • The shift array may develop the video context matrix as the input image moves through the processing architecture. The outputs of this module may be fed into the various modules that need all or some of this context. The data is preferably stored in a buffer or buffer-like device prior to and during preprocessing. Accordingly, as soon as the value of pixel V44 is available, the classification of pixel V22 can be started. A shift array may be used that brings in the current scan line and the four previous scan lines that have been stored in the scan line buffers. [0025]
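The scan-line buffering above can be modeled in software. The generator below is an illustrative sketch (not the patented hardware shift array), using a five-line deque as the scan-line buffers and assuming the whole image is available as rows of gray values:

```python
from collections import deque

def contexts_5x5(image):
    """Yield (center_value, window) for every full 5x5 context.

    `image` is a list of equal-length rows of gray values. A deque of
    five scan lines stands in for the scan-line buffers; the center
    pixel V22 of each yielded window is the pixel under classification,
    available as soon as the corresponding V44 has been read in.
    """
    lines = deque(maxlen=5)
    for row in image:
        lines.append(list(row))
        if len(lines) < 5:
            continue  # not enough context buffered yet
        for j in range(2, len(row) - 2):
            window = [line[j - 2:j + 3] for line in lines]
            yield window[2][2], window
```

Border pixels never acquire a full 5×5 context here, which is where the "default class" mentioned below comes into play.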
  • FIG. 2 shows a possible architecture for the image segmentation apparatus and method according to the present invention. One skilled in the art would understand that processing may occur on a special purpose computer, a programmed microprocessor or microcontroller and peripheral integrated circuit elements, an ASIC or other integrated circuit, a hardwired electronic or logic circuit such as a discrete element circuit, a programmable logic device such as a PLD, PLA, FPGA or PAL, or the like. Furthermore, specific algorithms may be accomplished using software in combination with specific hardware. [0026]
  • As shown in FIG. 2, two major features that may be extracted for segmentation in accordance with an embodiment of the present invention are video peak/valley count within a window containing the pixel being classified and the local roughness. Local roughness may represent the degree of gray level discontinuity computed as a combination of some gradient operators. One example of local roughness is the difference between the maximum and minimum of nine 3×3 window sums within a 5×5 video context. Other methods of determining the local roughness are also within the scope of the present invention. On the other hand, a pixel may be considered as a video peak or valley if its gray level is the highest or lowest in the neighborhood and the gray level difference between the pixel and the neighborhood average is greater than a certain threshold. Other methods of determining video peaks and/or valleys are also within the scope of the present invention. [0027]
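The two features above can be sketched directly from their definitions. The fixed detection threshold here is an assumed example value (the adaptive, average-linked threshold is a separate refinement), and the function names are illustrative:

```python
def local_roughness(win5):
    """Max minus min of the nine 3x3 window sums inside a 5x5 context,
    the example definition of local roughness given above."""
    sums = [
        sum(win5[r + dr][c + dc] for dr in range(3) for dc in range(3))
        for r in range(3) for c in range(3)
    ]
    return max(sums) - min(sums)

def is_peak_or_valley(win5, threshold=16):
    """Classify the center pixel V22 as 'peak', 'valley', or None.

    A pixel is a peak (valley) if its gray level is the strict highest
    (lowest) in the 5x5 neighborhood and differs from the neighborhood
    average by more than `threshold` (an assumed example value)."""
    center = win5[2][2]
    others = [v for r, row in enumerate(win5) for c, v in enumerate(row)
              if (r, c) != (2, 2)]
    avg = sum(others) / len(others)
    if center > max(others) and center - avg > threshold:
        return "peak"
    if center < min(others) and avg - center > threshold:
        return "valley"
    return None
```

A flat window yields zero roughness and no peak/valley; an isolated bright center in a flat field is a peak.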
  • Several lines of peak and valley patterns may be recorded in scan line buffers for computing peak/valley count within a defined window. In accordance with one embodiment, peak/valley count and local roughness may be used as indices to form a two-dimensional look-up table (hereafter also called a classification table) as a basis to classify data. FIG. 3 shows one example of a two-dimensional look-up table that uses five roughness levels and twelve peak/valley count levels. This results in sixty classification table entries (i.e., 5×12=60). Depending on a location within the look-up table, the video data may be mapped to certain classes such as low frequency halftone, high frequency halftone, smooth continuous tone, rough continuous tone, edge, text on halftone, etc. Depending on the class, the input data may be processed differently. [0028]
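The two-dimensional classification table of FIG. 3 (five roughness levels by twelve peak/valley count levels) can be sketched as a quantize-then-index lookup. The threshold vectors below are illustrative placeholders; in the patent both the quantization and the table assignment are programmable:

```python
def quantize(value, thresholds):
    """Map a raw value to a level index via ascending thresholds."""
    for level, t in enumerate(thresholds):
        if value < t:
            return level
    return len(thresholds)

def classify(roughness, pv_count, table,
             roughness_thresholds=(8, 32, 96, 192),
             count_thresholds=(1, 2, 3, 4, 5, 6, 8, 10, 12, 16, 24)):
    """Look up a class tag from quantized local roughness and
    peak/valley count. `table` is a 5x12 grid of class tags, one per
    classification-table entry (5 x 12 = 60 entries)."""
    r = quantize(roughness, roughness_thresholds)
    c = quantize(pv_count, count_thresholds)
    return table[r][c]
```

Low roughness with a high peak/valley count would index toward halftone classes, while high roughness with few peaks/valleys would index toward edge/text classes, depending entirely on the programmable table contents.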
  • The two-dimensional look-up table allows for flexibility in processing and rendering decision making, which in turn makes it possible to use a smaller video context and intermediate results buffer in the segmentation, and at the same time improve the image quality in areas such as stochastic screens, line screens, etc. [0029]
  • The look-up table (i.e., classification table) may be complemented with some special classes. One of them is the “edge class”. It tries to identify some line art and kanji area that could be missed by the look-up table. Another special class is the “white class”. It makes use of the absolute gray level information in addition to peak/valley count and roughness. The “default class” shown in FIG. 2 may be used for the borders of an image. The classification look-up table output may be multiplexed with the special classes to produce the final classification of a pixel (i.e., class output). The classification table assignment may be programmable, which allows for more flexibility in rendering adjustment. [0030]
  • As described above with respect to one embodiment, major features that may be extracted for segmentation include local roughness and video peak/valley count. The local roughness may be the difference between the maximum and minimum of the nine 3×3 window sums within a 5×5 video context, which extracts edge and text information effectively in most cases. However, the roughness detector is not sensitive to some line art and kanji text patterns and is prone to false detection of video peaks/valleys. To limit these problems, in one embodiment pattern detection may be introduced to complement other parts of the segmentation algorithm. This technique looks at the pattern of pixels across a scan line (or matrix) in order to classify the data. FIG. 4 shows eight examples of patterns which may be used in the segmentation algorithm of the present invention. In this figure, ↑ and ↓ denote transitions required across one line of the 5×5 video context, while a separate symbol (shown in FIG. 4) represents that a certain threshold has to be met. These patterns are defined according to certain parameters that must be met to classify the pixels as corresponding to a pattern. One embodiment of these parameters is described below. One skilled in the art would understand how these rule-based parameters are implemented in the algorithm of the present invention. [0031]
  • Vij represents the pixel at the ith row and the jth column with the row being in the fast scan direction and the column being in the slow scan direction. [0032]
  • Next, sumSs[j] is defined as the sum of the five pixels in the jth column, or [0033]
    sumSs[j] = Σ_{i=0}^{4} V_{i,j}, 0 <= j <= 4.
  • Another variable array is updownFs[j], 0<=j<=3. This is used to signal transitions. The assignment is determined as follows: if the following conditions are met [0034]
  • i) Vij<=Vi,j+1 for all i [0035]
  • ii) sumSs[j]<sumSs[j+1] [0036]
  • then updownFs[j] is assigned the value 1; on the other hand, if the following conditions are met [0037]
  • i) Vij>=Vi,j+1 for all i [0038]
  • ii) sumSs[j]>sumSs[j+1] [0039]
  • then updownFs[j] is assigned the value 2. Otherwise, updownFs[j] will be neither 1 nor 2. [0040]
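The column sums and transition flags defined above can be computed as follows. This is a sketch written directly from the definitions (updownFs[j] is 1 for a monotone rise from column j to j+1, 2 for a monotone fall, and 0 here to represent "neither 1 nor 2"):

```c
/* Compute column sums sumSs[5] and transition flags updownFs[4]
   for a 5x5 video context v. */
void transitions(const int v[5][5], int sumSs[5], int updownFs[4])
{
    for (int j = 0; j < 5; j++) {
        sumSs[j] = 0;
        for (int i = 0; i < 5; i++)
            sumSs[j] += v[i][j];
    }
    for (int j = 0; j < 4; j++) {
        int allUp = 1, allDown = 1;
        for (int i = 0; i < 5; i++) {
            if (v[i][j] > v[i][j + 1]) allUp = 0;   /* violates Vij <= Vi,j+1 */
            if (v[i][j] < v[i][j + 1]) allDown = 0; /* violates Vij >= Vi,j+1 */
        }
        if (allUp && sumSs[j] < sumSs[j + 1])
            updownFs[j] = 1;  /* rise */
        else if (allDown && sumSs[j] > sumSs[j + 1])
            updownFs[j] = 2;  /* fall */
        else
            updownFs[j] = 0;  /* neither */
    }
}
```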
  • Pixels are then classified as corresponding to one of Patterns 1-8 shown in FIG. 4 based on several variables. In a preferred embodiment, this pattern classification is as follows: [0041]
  • Pattern 1. [0042]
  • i) updownFs[0]=2 [0043]
  • ii) updownFs[2]=1 [0044]
  • iii) min(sumSs[0],sumSs[3])−min(sumSs[1],sumSs[2])>SUMDIF1 [0045]
  • Pattern 2. [0046]
  • i) updownFs[1]=2 [0047]
  • ii) updownFs[3]=1 [0048]
  • iii) min(sumSs[1],sumSs[4])−min(sumSs[2],sumSs[3])>SUMDIF1 [0049]
  • Pattern 3. [0050]
  • i) updownFs[0]=1 [0051]
  • ii) updownFs[1]=1 [0052]
  • iii) updownFs[2]=2 [0053]
  • iv) updownFs[3]=2 [0054]
  • v) sumSs[2]−sumSs[0]>SUMDIF2 [0055]
  • vi) sumSs[2]−sumSs[4]>SUMDIF2 [0056]
  • Pattern 4. [0057]
  • i) sumSs[3]−sumSs[2]>SUMDIF3 [0058]
  • ii) sumSs[2]−sumSs[1]>SUMDIF3 [0059]
  • Pattern 5. [0060]
  • i) updownFs[1]=1 [0061]
  • ii) updownFs[2]=2 [0062]
  • iii) max(sumSs[1],sumSs[2])−max(sumSs[0],sumSs[3])>SUMDIF4 [0063]
  • Pattern 6. [0064]
  • i) updownFs[1]=1 [0065]
  • ii) updownFs[3]=2 [0066]
  • iii) max(sumSs[2],sumSs[3])−max(sumSs[1],sumSs[4])>SUMDIF4 [0067]
  • Pattern 7. [0068]
  • i) updownFs[0]=2 [0069]
  • ii) updownFs[1]=2 [0070]
  • iii) updownFs[2]=1 [0071]
  • iv) updownFs[3]=1 [0072]
  • v) sumSs[0]−sumSs[2]>SUMDIF5 [0073]
  • vi) sumSs[4]−sumSs[3]>SUMDIF5 [0074]
  • Pattern 8. [0075]
  • i) sumSs[1]−sumSs[2]>SUMDIF6 [0076]
  • ii) sumSs[2]−sumSs[3]>SUMDIF6 [0077]
  • FIGS. 5A and 5B show examples of a 5×5 video context window with the pixel values shown. The numbers (200, 50, etc.) are gray levels of the pixels. By using such a pattern detector, text areas can be detected that would not be detected by a roughness detector. As can be seen, FIG. 5A corresponds to Pattern 1 shown in FIG. 4, while FIG. 5B shows an example of a video context window corresponding to Pattern 3. [0078]
  • The patterns described above identify certain gray level transitions in the fast scan direction. Patterns in the slow scan direction may also be detected in accordance with the present invention. [0079]
  • In processing and rendering of background areas, a white class may be desirable that makes use of the absolute gray value of the pixel as well. In accordance with one embodiment of the present invention three features may be used in detecting white class, namely, the brightness, the roughness and the halftone dot count. Prior segmentation schemes rely only on peak/valley count and background threshold to determine white class. However, including both roughness and halftone dot count to detect a white class adds flexibility to the algorithm without a significant cost increase. [0080]
  • In order to qualify as white, first the gray level of the pixel under classification should be greater than a predetermined value. The predetermined value may be a default number or may be obtained by computing the lead edge histogram of the image and detecting the peak at the light end of the histogram. The histogram of the whole page may also be used if it is available. Second, the local roughness of the pixel should also be below a certain threshold. As discussed above, the local roughness may be the difference between the maximum and minimum of the nine 3×3 window sums within a 5×5 video context. Third, the halftone dot count within a defined neighborhood of the pixel should be small enough. [0081]
  • As is well known to one skilled in the art, low-frequency halftone images are often processed and rendered differently than other types of pictorials such as high-frequency halftones, continuous tones, etc. For example, high-frequency images may be converted to continuous tone images using a low-pass filter and then re-screened for printing. Low-frequency halftones, on the other hand, are often rendered with error diffusion. There are many circumstances in which a non-low-frequency area could be classified as a low-frequency halftone if the video peak/valley count is the only criterion. Stochastic screen originals and line screen originals are some typical examples. Also, some part of high-frequency halftones may be misclassified as low-frequency image areas due to missing peaks/valleys in some local areas. False detection of low-frequency halftone may result in severe artifacts. One embodiment of the present invention uses a neighborhood checking mechanism to reduce the false detection of low-frequency halftones. [0082]
  • As discussed above, with respect to one embodiment, the halftone dot count within a window (i.e., 24 columns by 8 lines) and the local roughness may be used as indices to form the look-up table. The entries of the look-up table are then mapped to a certain number of classes. Neighborhood checking may be performed when the halftone dot count within the window is smaller than a programmable parameter. The algorithm checks a defined neighborhood of any peak/valley within the window. FIG. 6 shows one example of the shaded pixels forming the neighborhood of the black pixel. If there are peaks/valleys in the neighborhood of a peak/valley, then instead of using the original halftone dot count, a special index may be given to the pixel under detection. This neighborhood checking ensures that unless the video peaks/valleys within a window are some distance apart, the pixel will not be considered as a low-frequency halftone. That is, if there are closely located peaks/valleys within a window, then the pixel is not part of a low-frequency halftone but rather may be a high-frequency halftone. [0083]
  • FIG. 7 shows a similar embodiment in which the black pixel is under detection and the shaded pixels are neighboring pixels of interest. Using appropriate logic such as hardware, the pixel under detection will not be considered as a peak or a valley if any of the neighboring pixels of interest (i.e., the shaded pixels) are peaks or valleys. In other words, by the time the pixel under detection is being examined, the neighboring pixels of interest have all been detected, so the logic is easy to implement in hardware. The logic guarantees that if a pixel is detected as a peak or valley, then none of its eight immediate neighbors will be peaks or valleys. [0084]
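The suppression logic described above can be sketched as follows. In raster order, the immediate neighbors examined before pixel (row, col) are the pixel to its left and the three pixels above it, so only those four previously recorded flags need to be consulted (the buffer layout here is an assumption):

```c
/* Returns 1 if the pixel at (row, col) should NOT be recorded as a
   peak or valley because an already-examined immediate neighbor was.
   pv is a map of previously recorded peak/valley flags (1 = peak or
   valley), width is the scan-line width in pixels. */
int suppress(const unsigned char *pv, int width, int row, int col)
{
    /* Left neighbor on the current line. */
    if (col > 0 && pv[row * width + (col - 1)])
        return 1;
    /* Three neighbors on the previous line. */
    if (row > 0) {
        for (int dc = -1; dc <= 1; dc++) {
            int c = col + dc;
            if (c >= 0 && c < width && pv[(row - 1) * width + c])
                return 1;
        }
    }
    return 0; /* no flagged neighbor: the pixel may keep its flag */
}
```

Because suppression at detection time clears the flag for the current pixel, checking only the causal (already-seen) neighbors is enough to guarantee that no two flagged peaks/valleys are ever adjacent.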
  • In one embodiment of the present invention, a triple window may be used to determine peak/valley counts and the halftone dot count average associated with the pixel under classification. For example, the peak/valley information in a window of 24 columns by 8 lines around a pixel is examined. FIG. 8A shows the 24×8 window divided into three smaller (8×8) windows. In each of the 8×8 windows, the greater of the peak and valley counts is chosen to represent the halftone dot count of that window. This improves the accuracy of counting in the area with major gray level changes. [0085]
  • Given the halftone dot counts of the three 8×8 windows, a set of rules may be used to determine the final halftone dot count associated with the pixel under classification. The halftone dot counts of the three windows are denoted dotLeft, dotMid and dotRight, respectively, and dotCount is the final halftone dot count. The rules can be described by the following C-like statement, [0086]
    if (dotLeft > dotMid && dotMid < dotRight)
    {
     dotCount = min(dotLeft,dotRight)*2;
    }
    else if (dotLeft < dotMid && dotMid > dotRight)
    {
     if (min(dotLeft,dotRight) == LOWCOUNT)
     {
      dotCount = dotMid + dotRight;
     }
     else
     {
      dotCount = max(dotMid,dotRight)*2;
     }
    }
    else
    {
     if (dotMid >= LOWCOUNT)
     {
      dotCount = dotMid + max(dotLeft,dotRight);
     }
     else
     {
      dotCount = dotMid + min(dotLeft,dotRight);
     }
    }
  • where LOWCOUNT is a programmable parameter. [0087]
  • Using the above rules instead of simple averaging improves the halftone dot count in areas of peak/valley misdetection or where peak/valley density transitions occur. [0088]
  • More specifically, FIG. 8A shows an example of video peak pattern in a 24×8 window and FIG. 8B shows an example of video valley pattern in a 24×8 window. The dotLeft, dotMid and dotRight are 4, 4 and 5 respectively. If LOWCOUNT is set to be 3, then dotCount, which reflects two thirds of the halftone dot count within the 24×8 window, is 9. This type of rule-based calculation generally works better than simple averaging in transitional areas. [0089]
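Wrapping the C-like statement above as a compilable function makes the worked example checkable. With dotLeft = 4, dotMid = 4, dotRight = 5 and LOWCOUNT = 3, neither window sequence is a strict dip or strict hump, so the final else branch applies and dotCount = dotMid + max(dotLeft, dotRight) = 9, matching the text:

```c
#define LOWCOUNT 3  /* programmable parameter; 3 matches the example */

static int imin(int a, int b) { return a < b ? a : b; }
static int imax(int a, int b) { return a > b ? a : b; }

/* Rule-based final halftone dot count from the three 8x8 window counts. */
int dot_count(int dotLeft, int dotMid, int dotRight)
{
    if (dotLeft > dotMid && dotMid < dotRight)       /* dip in the middle */
        return imin(dotLeft, dotRight) * 2;
    if (dotLeft < dotMid && dotMid > dotRight)       /* hump in the middle */
        return imin(dotLeft, dotRight) == LOWCOUNT
                   ? dotMid + dotRight
                   : imax(dotMid, dotRight) * 2;
    return dotMid >= LOWCOUNT                        /* monotone or flat */
               ? dotMid + imax(dotLeft, dotRight)
               : dotMid + imin(dotLeft, dotRight);
}
```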
  • The gray difference between a peak or valley and its neighbors is greatest in mid-tone areas as compared to highlight or shadow areas. Thus, one embodiment of the present invention links the threshold for peak/valley detection to the neighborhood average gray value so as to reduce misclassification. For example, a pixel may be considered as a video peak/valley if its gray level is the highest/lowest in the neighborhood and also, the gray level difference between the pixel and the neighborhood average is greater than a certain threshold. The qualifying conditions of peak/valley and the definition of neighborhood may vary. If the threshold for peak/valley detection is set too high, then some halftone dots in the highlight or shadow area may be missed. On the other hand, if the threshold is set too low, then some potential noise or non-halftone video gray level variation could be falsely identified as halftone dots. To reduce misclassification, the threshold for peak/valley detection may be tied to the neighborhood average gray value. [0090]
  • The threshold-video average correlation may be established through statistical analysis. The implementation may be a look-up table or some simple formula. For example, the Threshold-Video_Average plot shown in FIG. 9 may be represented by the following equation, [0091]
    Threshold = C1 − Video_Average/16, for Video_Average > MT
    Threshold = C2 + Video_Average^2/2048, for Video_Average <= MT
  • with C1=21, C2=5 and MT=128. [0092]
  • A 16-entry table may be used to achieve similar results. [0093]
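The two-piece formula above is straightforward to implement; integer division mirrors what a hardware or table-based version would compute. With the stated constants the two pieces meet at the mid-tone point MT = 128, where both evaluate to 13:

```c
/* Adaptive peak/valley detection threshold as a function of the
   neighborhood average gray value, with C1 = 21, C2 = 5, MT = 128. */
int pv_threshold(int videoAverage)
{
    const int C1 = 21, C2 = 5, MT = 128;
    if (videoAverage > MT)
        return C1 - videoAverage / 16;                     /* upper piece */
    return C2 + (videoAverage * videoAverage) / 2048;      /* lower piece */
}
```

A 16-entry table indexed by videoAverage/16 would approximate the same curve, trading a multiply for a small memory.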
  • An implementation of this approach has been tested with the video average calculated as the average of the eight immediate neighbors of the pixel under detection for peak/valley. [0094]
  • In at least one embodiment of the present invention, video averaging is used to reduce the possibility of misclassifying peaks and valleys. [0095]
  • Halftone dots, when present in the form of video peaks, generally occur in areas with relatively low average gray value. The converse is true for halftone dots in the form of video valleys. By limiting peak detection to the area where the video average is below a certain threshold and limiting valley detection to the area where the video average is above a certain threshold, some false detection can be prevented. The threshold for peak detection may be set to be greater than the threshold for valley detection by some margin, making sure that halftone dots will not be missed. The video average could be the average gray level of a 5×5 window. [0096]
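The gating described above amounts to two simple predicates. The gate values here are assumptions chosen only to illustrate the stated relationship (the peak gate exceeds the valley gate by a margin, so mid-tone averages permit both detections):

```c
#define PEAK_GATE   192  /* only look for peaks where the average is below this  */
#define VALLEY_GATE  64  /* only look for valleys where the average is above this */

int may_be_peak(int videoAverage)   { return videoAverage < PEAK_GATE; }
int may_be_valley(int videoAverage) { return videoAverage > VALLEY_GATE; }
```

Very bright regions are thus excluded from peak detection and very dark regions from valley detection, while the overlap between the two gates keeps genuine halftone dots from being missed.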
  • While the invention has been described in relation to preferred embodiments, many modifications and variations are apparent from the description of the invention, and all such modifications and variations are intended to be within the scope of the present invention as defined in the appended claims. [0097]

Claims (12)

What is claimed is:
1. An apparatus for classifying image data comprising:
an input device that receives the image data;
a peak/valley detection device that receives the image data and determines one of a peak count and a valley count within a window of the image data around a pixel under consideration, wherein the peak/valley detection device includes a neighborhood checking device that checks whether any peaks or valleys are located within a neighborhood of the pixel under consideration; and
an output device that outputs a signal corresponding to a classification of the pixel under consideration based on the determination of the peak/valley detection device.
2. The apparatus of claim 1, further comprising a local roughness device that determines a local roughness of the input image data, and the output device outputs the signal also based on the local roughness determined by the local roughness device.
3. The apparatus of claim 1, further comprising a halftone detector for determining a halftone dot count of the window of the image data.
4. The apparatus of claim 3, wherein the neighborhood checking device checks whether any peaks or valleys are located within the neighborhood after the halftone detector determines the halftone dot count is less than a predetermined number.
5. A method of classifying input image data comprising the steps of:
receiving the image data;
determining a halftone dot count of a window of the image data including a pixel under consideration;
if the determined halftone dot count is less than a predetermined number, then checking a neighborhood of the pixel under consideration for any peaks and valleys; and
classifying the data based on the number of peaks and valleys within the neighborhood.
6. The method of claim 5, further comprising the step of determining the local roughness of a window of the image data.
7. A method of classifying input image data comprising the steps of:
receiving the image data;
evaluating pixels within a window of the image data, wherein each of the pixels within the window is evaluated unless any pixel within a neighborhood of a desired pixel has been classified as a peak or valley; and
classifying pixels as peaks or valleys at least based on the evaluation step.
8. The method of claim 7, further comprising the step of processing the image data based on the classification of the image data.
9. The method of claim 7, wherein the classifying step is further based on a gray level difference between a pixel under consideration and a neighborhood average gray value.
10. An apparatus for classifying image data comprising:
an image input device that receives the image data; and
a processing device that determines one of peaks and valleys within a window of the image data, wherein the window includes a neighborhood of pixels about a specified pixel, the processing device determining the one of peaks and valleys within the window unless a pixel within the neighborhood about the specified pixel has been determined to be a peak or valley.
11. The apparatus of claim 10, further comprising a classification device that classifies the image data at least based on the number of peaks and valleys determined by the processing device.
12. The apparatus of claim 11, further comprising an output device that processes the image data based on a classification output from the classification device.
US09/158,788 1998-09-23 1998-09-23 Image segmentation apparatus and method Expired - Lifetime US6389164B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US09/158,788 US6389164B2 (en) 1998-09-23 1998-09-23 Image segmentation apparatus and method


Publications (2)

Publication Number Publication Date
US20020012475A1 true US20020012475A1 (en) 2002-01-31
US6389164B2 US6389164B2 (en) 2002-05-14

Family

ID=22569714

Family Applications (1)

Application Number Title Priority Date Filing Date
US09/158,788 Expired - Lifetime US6389164B2 (en) 1998-09-23 1998-09-23 Image segmentation apparatus and method

Country Status (1)

Country Link
US (1) US6389164B2 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040027614A1 (en) * 2002-08-09 2004-02-12 Xerox Corporation System for identifying low-frequency halftone screens in image data
US20050265600A1 (en) * 2004-06-01 2005-12-01 Xerox Corporation Systems and methods for adjusting pixel classification using background detection
US9508165B1 (en) * 2015-06-30 2016-11-29 General Electric Company Systems and methods for peak tracking and gain adjustment
US9734603B2 (en) 2015-06-30 2017-08-15 General Electric Company Systems and methods for peak tracking and gain adjustment
US20190004777A1 (en) * 2015-04-23 2019-01-03 Google Llc Compiler for translating between a virtual image processor instruction set architecture (isa) and target hardware having a two-dimensional shift array structure
CN117351008A (en) * 2023-12-04 2024-01-05 深圳市阿龙电子有限公司 Smart phone panel surface defect detection method

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001082097A (en) * 1999-09-13 2001-03-27 Toshiba Corp Tunnel ventilation control device
US20060239555A1 (en) * 2005-04-25 2006-10-26 Destiny Technology Corporation System and method for differentiating pictures and texts
US7792359B2 (en) * 2006-03-02 2010-09-07 Sharp Laboratories Of America, Inc. Methods and systems for detecting regions in digital images
US7889932B2 (en) * 2006-03-02 2011-02-15 Sharp Laboratories Of America, Inc. Methods and systems for detecting regions in digital images
US8630498B2 (en) * 2006-03-02 2014-01-14 Sharp Laboratories Of America, Inc. Methods and systems for detecting pictorial regions in digital images
US7864365B2 (en) * 2006-06-15 2011-01-04 Sharp Laboratories Of America, Inc. Methods and systems for segmenting a digital image into regions
US8437054B2 (en) * 2006-06-15 2013-05-07 Sharp Laboratories Of America, Inc. Methods and systems for identifying regions of substantially uniform color in a digital image
US7876959B2 (en) * 2006-09-06 2011-01-25 Sharp Laboratories Of America, Inc. Methods and systems for identifying text in digital images
US20090041344A1 (en) * 2007-08-08 2009-02-12 Richard John Campbell Methods and Systems for Determining a Background Color in a Digital Image
US8014596B2 (en) * 2007-10-30 2011-09-06 Sharp Laboratories Of America, Inc. Methods and systems for background color extrapolation
US8111918B2 (en) * 2008-10-20 2012-02-07 Xerox Corporation Segmentation for three-layer mixed raster content images
US9715624B1 (en) * 2016-03-29 2017-07-25 Konica Minolta Laboratory U.S.A., Inc. Document image segmentation based on pixel classification

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4194221A (en) 1978-12-26 1980-03-18 Xerox Corporation Automatic multimode continuous halftone line copy reproduction
NL8503558A (en) 1985-12-24 1987-07-16 Oce Nederland Bv METHOD AND APPARATUS FOR RECOGNIZING SEMI-TONE IMAGE INFORMATION.
GB2224906B (en) * 1988-10-21 1993-05-19 Ricoh Kk Dot region discriminating method
US5754708A (en) * 1994-11-16 1998-05-19 Mita Industrial Co. Ltd. Dotted image area detecting apparatus and dotted image area detecting method

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040027614A1 (en) * 2002-08-09 2004-02-12 Xerox Corporation System for identifying low-frequency halftone screens in image data
US7280253B2 (en) * 2002-08-09 2007-10-09 Xerox Corporation System for identifying low-frequency halftone screens in image data
US20050265600A1 (en) * 2004-06-01 2005-12-01 Xerox Corporation Systems and methods for adjusting pixel classification using background detection
US20190004777A1 (en) * 2015-04-23 2019-01-03 Google Llc Compiler for translating between a virtual image processor instruction set architecture (isa) and target hardware having a two-dimensional shift array structure
US10599407B2 (en) * 2015-04-23 2020-03-24 Google Llc Compiler for translating between a virtual image processor instruction set architecture (ISA) and target hardware having a two-dimensional shift array structure
US11182138B2 (en) 2015-04-23 2021-11-23 Google Llc Compiler for translating between a virtual image processor instruction set architecture (ISA) and target hardware having a two-dimensional shift array structure
US9508165B1 (en) * 2015-06-30 2016-11-29 General Electric Company Systems and methods for peak tracking and gain adjustment
US9734603B2 (en) 2015-06-30 2017-08-15 General Electric Company Systems and methods for peak tracking and gain adjustment
CN117351008A (en) * 2023-12-04 2024-01-05 深圳市阿龙电子有限公司 Smart phone panel surface defect detection method

Also Published As

Publication number Publication date
US6389164B2 (en) 2002-05-14

Similar Documents

Publication Publication Date Title
US6178260B1 (en) Image segmentation apparatus and method
US6782129B1 (en) Image segmentation apparatus and method
US6360009B2 (en) Image segmentation apparatus and method
US6389164B2 (en) Image segmentation apparatus and method
EP0521662B1 (en) Improved automatic image segmentation
JP4396324B2 (en) Method for determining halftone area and computer program
US6137907A (en) Method and apparatus for pixel-level override of halftone detection within classification blocks to reduce rectangular artifacts
JP4633907B2 (en) Method and apparatus for masking scanning artifacts in image data representing a document and image data processing system
US5852678A (en) Detection and rendering of text in tinted areas
US6674899B2 (en) Automatic background detection of scanned documents
US7876959B2 (en) Methods and systems for identifying text in digital images
EP0712094A2 (en) A multi-windowing technique for threshholding an image using local image properties
US6272240B1 (en) Image segmentation apparatus and method
US7411699B2 (en) Method and apparatus to enhance digital image quality
US7746503B2 (en) Method of and device for image enhancement
EP0500174B1 (en) Image processing method and scan/print system for performing the method
US6529629B2 (en) Image segmentation apparatus and method
US6185335B1 (en) Method and apparatus for image classification and halftone detection
US7280253B2 (en) System for identifying low-frequency halftone screens in image data
US7251059B2 (en) System for distinguishing line patterns from halftone screens in image data
US6411735B1 (en) Method and apparatus for distinguishing between noisy continuous tone document types and other document types to maintain reliable image segmentation
JP3453939B2 (en) Image processing device
JP3251119B2 (en) Image processing device
US20050265600A1 (en) Systems and methods for adjusting pixel classification using background detection
KR19990011500A (en) Local binarization method of the imaging system

Legal Events

Date Code Title Description
AS Assignment

Owner name: XEROX CORPORATION, CONNECTICUT

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LI, XING;MEYERS, MICHAEL E.;TSE, FRANCIS K.;REEL/FRAME:009480/0570

Effective date: 19980923

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: BANK ONE, NA, AS ADMINISTRATIVE AGENT, ILLINOIS

Free format text: SECURITY INTEREST;ASSIGNOR:XEROX CORPORATION;REEL/FRAME:013153/0001

Effective date: 20020621

AS Assignment

Owner name: JPMORGAN CHASE BANK, AS COLLATERAL AGENT, TEXAS

Free format text: SECURITY AGREEMENT;ASSIGNOR:XEROX CORPORATION;REEL/FRAME:015134/0476

Effective date: 20030625

Owner name: JPMORGAN CHASE BANK, AS COLLATERAL AGENT,TEXAS

Free format text: SECURITY AGREEMENT;ASSIGNOR:XEROX CORPORATION;REEL/FRAME:015134/0476

Effective date: 20030625

FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

FPAY Fee payment

Year of fee payment: 12

AS Assignment

Owner name: XEROX CORPORATION, CONNECTICUT

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:JPMORGAN CHASE BANK, N.A. AS SUCCESSOR-IN-INTEREST ADMINISTRATIVE AGENT AND COLLATERAL AGENT TO JPMORGAN CHASE BANK;REEL/FRAME:066728/0193

Effective date: 20220822