JP4878572B2 - Image processing apparatus and image processing method - Google Patents

Publication number: JP4878572B2
Authority: JP (Japan)
Prior art keywords: image, processing, modulation, pixel, pixel value
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: JP2007093551A
Other languages: Japanese (ja)
Other versions: JP2008252699A
Inventors: 亜由美 堀, 史博 後藤, 文孝 後藤, 雄介 橋井, 徹哉 諏訪
Original assignee: キヤノン株式会社 (Canon Inc.)
Filing: application JP2007093551A filed by キヤノン株式会社, with priority to JP2007093551A
Publications: JP2008252699A (application publication); JP4878572B2 (granted patent)

Classifications

    • H: Electricity
    • H04: Electric communication technique
    • H04N: Pictorial communication, e.g. television
    • H04N 1/00: Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; details thereof
    • H04N 1/46: Colour picture communication systems
    • H04N 1/56: Processing of colour picture signals
    • H04N 1/60: Colour correction or control
    • H04N 1/6072: Colour correction or control adapting to different types of images, e.g. characters, graphs, black and white image portions

Description

The present invention relates to an image processing apparatus and an image processing method, and more particularly to applying, or refraining from applying, processing such as smoothing to an image.

Recently, multifunction peripherals (hereinafter also referred to as IJMFPs) that combine an inkjet recording mechanism with a scanner mechanism have become widely available. An IJMFP serves various purposes: it can connect to a personal computer (PC) for printing and scanning, copy on its own as a single device, and print directly from a digital camera. For these reasons it is also used, for example, as a home copying machine. The copy function of an IJMFP reads an original image with the scanner and records the read image on a recording medium such as paper.

In such copying, the color reproduction range generally differs depending on the type of document being copied. It can therefore be difficult to obtain colors that visually match between the original and the copy output. The reproduced gradation may also vary with the document type.

To address this, Patent Document 1 proposes a technique using image area separation: a read image is separated into at least halftone-dot and photograph areas, and optimal gamma conversion is applied to each area so that good images are obtained in all areas. Similarly, Patent Document 2 describes a method of separating a read image into character and photograph areas and applying optimal color space conversion to each area.

In addition, when copying to plain paper with an IJMFP, the color reproduction range of plain paper is narrower than that of originals such as printed matter and silver halide photographs, so depending on the color compression method, pseudo contours and loss of gradation may occur. Against this, Patent Document 3 describes a method of detecting a character edge amount and adding random numbers according to that amount, suppressing pseudo contours in continuous-tone areas without impairing the sharpness of characters. Similarly, Patent Document 4 sets a partial area of multi-value color image data to which random number data is added, and corrects gradation skips by adding random number data only to the set area, so that the rest of the image retains its original values.

Patent Document 1: JP 2001-251513 A
Patent Document 2: JP 2002-218271 A
Patent Document 3: JP 10-155087 A
Patent Document 4: JP 2001-144943 A

As described above, the color reproduction range of an input image, such as an original to be copied, differs from that of the recording apparatus, and color compression is generally used to bridge the gap. However, when the color reproduction range is comparatively narrow, as when plain paper is used with the inkjet method, a single color compression method cannot achieve optimum color reproduction for both characters and photographic images. For example, if high-contrast color compression is applied so that characters come out clearly, applying the same compression to a photograph may destroy its high-density, high-saturation gradations. Conversely, if color compression emphasizes gradation so that photographic tones are preserved, applying it to characters records black characters and the like too thinly. In particular, black characters output on plain paper by the inkjet method are then recorded not as so-called solid black characters but thinly, like low-density pencil characters, and clear character output cannot be obtained.

Patent Documents 1 and 2, described above, were proposed to achieve both character and photographic image recording. However, switching among multiple color compression tables or gamma processing tables for each image requires a memory area to hold a table suited to each of characters and photographs.

Patent Documents 3 and 4, also described above, were proposed as ways of changing the processing according to the image area without adding tables. The techniques in these documents, however, improve gradation reproducibility by adding noise to portions where pseudo contours arise, erasing pseudo contours after they have occurred. They do not control the processing so as to weaken its effect in anticipation of image conversion performed later.

An object of the present invention is to provide an image processing apparatus and an image processing method that achieve good reproducibility of both characters and photographic images without adding per-image-area resources such as extra tables.

To this end, the present invention provides an image processing apparatus that executes image processing including a process that, when a component representing an image defined in a predetermined color space falls within a predetermined range of that color space, changes the value of the component based on the pixel value. The apparatus is characterized by comprising pixel value modulation means for modulating pixel values by adding a modulation amount to the component, based on the pixel values of the image data, so that the component falls outside the predetermined range, thereby reducing the number of pixels subjected to the process.

The present invention likewise provides an image processing method for executing image processing including a process that, when a component representing an image defined in a predetermined color space falls within a predetermined range of that color space, changes the value of the component based on the pixel value. The method is characterized by comprising a pixel value modulation step of modulating pixel values by adding a component modulation amount to the component, based on the pixel values of the image data, so that the component falls outside the predetermined range, thereby reducing the number of pixels subjected to the process.

According to the above configuration, pixel values are modulated so that fewer pixels are subjected to processing such as black crushing or saturation enhancement. As a result, even when a photographic image is the target of such processing, its effect can be weakened, and a recording result free of gradation collapse can be obtained for the photographic image. Characters and line drawings such as text and ruled lines, on the other hand, still undergo the processing and yield a clear recording result.

As a result, good reproducibility of both characters and photographic images can be achieved without adding per-image-area resources such as extra tables.

  Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings.

MFP Apparatus

FIGS. 1(a) and 1(b) are perspective views of a multifunction printer (hereinafter, MFP apparatus) according to an embodiment of the present invention, shown with the platen cover, which also serves as an auto document feeder, closed and opened, respectively. The MFP apparatus 1 functions as an ordinary PC printer that receives and records data from a host computer (PC), and as a scanner. It further provides a copy function that records an image read by the scanner with the printer, a function for directly reading and recording image data stored in a storage medium such as a memory card, and a function for receiving and recording image data from a digital camera.

The MFP apparatus 1 includes a reading device 34, a scanner with a CCD sensor that reads a document placed directly on the document table or fed by an auto document feeder (hereinafter, ADF) 31. The recording device 33 is an inkjet device that records on a recording medium such as paper using four ink colors: cyan (C), magenta (M), yellow (Y), and black (K).

The MFP apparatus 1 further includes an operation panel 35 with a display panel 39 and various key switches. A USB port (not shown) for communicating with a PC is provided on the back of the MFP apparatus 1, along with a card slot 42 for reading data from various memory cards and a camera port 43 for data communication with a digital camera. Note that, in applying the present invention, the recording method is not limited to the inkjet method; other methods such as electrophotography may be used.

  FIG. 2 is a block diagram showing a configuration for executing control and image processing of the MFP 1 shown in FIGS. 1 (a) and 1 (b).

  In FIG. 2, the CPU 11 controls various functions of the MFP 1 and executes an image processing program stored in the ROM 16 according to a predetermined operation in the operation unit 15 having the operation panel 35. A reading unit 14 having a reading device 34 reads a document image and outputs analog luminance data of red (R), green (G), and blue (B) colors. Note that the reading unit 14 may include a contact image sensor (CIS) instead of the CCD.

The card interface 22, which has the card slot 42, reads image data captured by, for example, a digital still camera (DSC) and recorded on a memory card, according to a predetermined operation on the operation unit 15. The color space of image data read via the card interface 22 is converted, as necessary, by the image processing unit 12 from the DSC color space (for example, YCbCr) to a standard RGB color space (for example, NTSC-RGB or sRGB). The read image data also undergoes, as necessary and based on its header information, various processing required by the application, such as resolution conversion to the effective number of pixels. The camera interface 23, which has the camera port 43, connects directly to a DSC and reads image data.

The image processing unit 12 performs image processing such as input device color conversion, image correction/processing, output device color conversion, color separation, and quantization, described later with reference to FIG. 3. The resulting recording data is stored in the RAM 17. When the recording data stored in the RAM 17 reaches the predetermined amount required for recording by the recording unit 13, which has the recording device 33, the recording operation by the recording unit 13 is executed.

The nonvolatile RAM 18 is constituted by a battery-backed SRAM or the like and stores data unique to the image processing apparatus. The operation unit 15 is provided with a photo direct print start key, a key for printing an order sheet, and a key for reading an order sheet, used to select image data stored in a storage medium and start recording. It also has copy start keys for monochrome and color copying, a mode key for specifying copy resolution, image quality, and the like, a stop key for stopping a copy operation, a numeric keypad for entering the number of copies, and a registration key. The CPU 11 detects the pressed state of these keys and controls each part accordingly.

  The display unit 19 includes a display panel 39 (FIG. 1A). That is, the display unit includes a dot matrix type liquid crystal display unit (LCD) and an LCD driver, and performs various displays based on the control of the CPU 11. Also, thumbnails of the image data recorded on the storage medium are displayed. The recording unit 13 having the recording device 33 is configured by an ink jet recording head, a general-purpose IC, and the like, and reads recording data stored in the RAM 17 under the control of the CPU 11 and records it as a hard copy.

The drive unit 21 comprises stepping motors and DC motors for driving the paper supply and discharge rollers, gears for transmitting their driving force, and driver circuits that control these motors in the operations of the reading unit 14 and recording unit 13 described above. The sensor unit 20 includes a recording paper width sensor, a recording paper presence sensor, a document width sensor, a document presence sensor, a recording medium detection sensor, and the like. The CPU 11 detects the state of the document and the recording paper based on information obtained from these sensors.

  The PC interface 24 is an interface between the PC and the MFP apparatus 1, and the MFP apparatus receives a recording operation or reading instruction from the PC via the PC interface 24.

  In the above configuration, during the copying operation, the image processing unit 12 performs predetermined image processing on the image data read by the reading device 34, and the recording device 33 records based on the result data.

Image Processing

FIG. 3 is a flowchart showing the image processing executed at the time of copying in the MFP apparatus of the present embodiment.

In FIG. 3, first, in step 501, shading correction is applied to the data read by the reading unit 14 and A/D-converted, correcting variations among the image sensor elements. Next, in step 502, input device color conversion converts the image signal data from the device-specific color space into signal data of a standard color space independent of the device. Known standard color spaces include sRGB, defined by the IEC (International Electrotechnical Commission), and Adobe RGB, proposed by Adobe Systems. In this embodiment the conversion is performed using a lookup table; a matrix calculation method can also be used.
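As an illustration of the matrix-based alternative to the lookup table, the following Python sketch converts one device-RGB pixel into a standard RGB space with a 3 x 3 matrix. The matrix values here are placeholder assumptions for illustration only; a real device profile would be derived from colorimetric measurements of the scanner.

```python
# Placeholder device-to-standard conversion matrix (assumed values; each row
# sums to 1.0 so neutral gray is preserved). Not Canon's calibration data.
DEVICE_TO_STANDARD = [
    [1.02, -0.01, -0.01],
    [-0.02, 1.03, -0.01],
    [0.00, -0.02, 1.02],
]

def device_to_standard_rgb(rgb):
    """Convert one device-RGB pixel (0-255 per channel) to standard RGB
    by a 3x3 matrix multiply, rounding and clipping to the 8-bit range."""
    out = []
    for row in DEVICE_TO_STANDARD:
        v = sum(m * c for m, c in zip(row, rgb))
        out.append(max(0, min(255, round(v))))
    return out
```

Because each row sums to 1.0, neutral inputs pass through unchanged, which is a common sanity check for such matrices.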

The converted data is subjected to correction/processing in step 503. This includes edge enhancement to correct image blur caused by reading, character processing to improve the legibility of characters, and processing to remove show-through caused by illuminating the original during reading. The pixel value modulation processing and pixel value processing according to the embodiments of the present invention, described in detail later with reference to FIG. 9, are also performed as part of this correction/processing.

In step 504, enlargement/reduction processing is executed. When the user designates scaling, or for layout copies that place, for example, two originals on a single sheet, the image is converted to the desired magnification, typically by a method such as bicubic or nearest-neighbor interpolation.

Next, in step 505, the image signal data in the standard color space is converted into signal data unique to the recording apparatus, i.e., the output device. As described later, this conversion is performed by gamut mapping (output device color conversion).

Next, in step 506, the output-device-specific signal data is converted into data for the ink colors used in the MFP apparatus: cyan (C), magenta (M), yellow (Y), and black (K). The same method as in step 502 can be used for this conversion. In step 507, the image signal data is converted into the number of levels recordable by the recording device 33. The recording device 33 of this embodiment expresses an image with binary values indicating whether or not ink is ejected, so the data is converted into binary data by a quantization method such as error diffusion.
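The error diffusion mentioned in step 507 can be sketched as follows. This is a generic Floyd-Steinberg binarization, not the patent's specific quantizer; the 128 threshold and the convention that a dark pixel means an ink dot are assumptions for illustration.

```python
def error_diffuse(gray, width, height):
    """Binarize an 8-bit grayscale image with Floyd-Steinberg error diffusion.
    `gray` is a flat row-major list of 0-255 values; the result is a flat
    list of 0/1 decisions, where 1 means an ink dot is ejected (dark)."""
    buf = [float(v) for v in gray]
    out = [0] * (width * height)
    for y in range(height):
        for x in range(width):
            i = y * width + x
            old = buf[i]
            new = 255.0 if old >= 128 else 0.0   # threshold decision
            out[i] = 0 if new else 1             # dark pixel -> ink dot
            err = old - new
            # Distribute the quantization error to unprocessed neighbors.
            if x + 1 < width:
                buf[i + 1] += err * 7 / 16
            if y + 1 < height:
                if x > 0:
                    buf[i + width - 1] += err * 3 / 16
                buf[i + width] += err * 5 / 16
                if x + 1 < width:
                    buf[i + width + 1] += err * 1 / 16
    return out
```

On a uniform mid-gray patch, the diffused error makes roughly half the output pixels receive dots, which is how the binary device approximates intermediate tones.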

Next, the output device color conversion in step 505 will be described in more detail. In the present embodiment, a lookup table for output device color conversion is defined as the output profile; below it is also referred to as the output color conversion table.

The output color conversion table associates color signals in the sRGB color space, a standard color space, with color signals in the color gamut of the recording apparatus (hereinafter also simply the printer color gamut). Specifically, the table defines discrete grid points by signal data in the sRGB color space and associates a color signal in the printer color gamut with each grid point.

FIG. 4 shows the sRGB color gamut 601, based on the sRGB color space signal as the standard color space, and the printer color gamut 602 in the CIE L*a*b* color system. Hereinafter, the color spaces shown in the drawings of this embodiment are all expressed in the CIE L*a*b* color system. The color system is not limited to CIE L*a*b*; a similar color space such as L*u*v* may also be used.

As shown in FIG. 4, the sRGB color gamut 601 and the printer color gamut 602 differ in shape and size. Therefore, when creating the color conversion table, a gamut compression technique is used to compress the color gamut of the standard color space into the printer color gamut. The gamut compression used in this embodiment provides a non-compressed area inside the printer color gamut, in which colors of the standard color space are reproduced colorimetrically, and compresses the remaining colors of the standard color space into the part of the printer color gamut outside the non-compressed area. With this method, colors in the non-compressed area are reproduced to match the standard color space colorimetrically, while gradation is maintained outside the non-compressed area. For example, when recording media whose color gamuts differ in shape, such as photographic paper and matte paper, are used for copying, the same color reproduction can be realized on both.

FIG. 5 illustrates an example of the gamut compression used in the output device color conversion (S505) shown in FIG. 3. In FIG. 5, color gamuts 701 and 702 are projections of the sRGB color gamut and the printer color gamut onto the L*a* plane, respectively. Color gamut 703 is the non-compressed region, in which colors of the sRGB color space are reproduced colorimetrically. In this example the non-compressed area has a shape similar to the printer color gamut and 80% of its size. Point O is the compression convergence point. Points 704 and 708 are colors corresponding to grid points in the sRGB color space.

In gamut compression, it is first determined whether a given grid point of the sRGB color space lies inside the non-compressed region. This inside/outside determination proceeds as follows. First, the length of the vector connecting the point to be determined with the compression convergence point (the source vector) is calculated. Next, the distance from the compression convergence point to the intersection of that line with the boundary of the region (the color gamut vector) is obtained, and the lengths of the source vector and the color gamut vector are compared. If the source vector is longer than the color gamut vector, the point is determined to be outside the region; if shorter, inside.
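The length comparison above can be sketched in Python as follows. Finding the intersection of the line with the region boundary is assumed to be done elsewhere; this helper only performs the final comparison of the two vector lengths.

```python
import math

def is_inside(point, boundary_point, convergence):
    """Return True if `point` lies inside the region whose boundary, along
    the ray from `convergence` through `point`, is at `boundary_point`.
    Arguments are (L*, a*) pairs; locating `boundary_point` (the ray/boundary
    intersection) is assumed to be computed separately."""
    src = math.dist(convergence, point)             # source vector length
    gamut = math.dist(convergence, boundary_point)  # color gamut vector length
    return src <= gamut
```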

By this inside/outside determination, the point 708 is found to lie in the non-compressed region, so no compression is performed and the input sRGB value is held as is. The point 704, on the other hand, is not a color in the non-compressed region, so it is gamut-compressed into the part of the printer color gamut outside the non-compressed area, as follows. First, the distance X between point 704 and point O is calculated. On the straight line connecting point O and point 704, the intersection 705 with the outline of the sRGB color gamut 701, the intersection 706 with the outline of the printer color gamut 702, and the intersection 707 with the outline of the non-compressed area 703 are found, and the distance from point O to each is calculated; in FIG. 5 these distances are T, D, and F, respectively. Based on these distances, point 704 is compressed into the printer color gamut: it is moved along the line connecting point O and point 704 to the distance given by the linear compression function (1).

The compression function need not be linear as in equation (1); for example, a higher-order function, or one that compresses gradation more strongly the farther a position lies outside the color gamut, may be used. The size of the non-compressed area is about 80% of the printer color gamut here, but it is not limited to this. For example, if it is set to 100%, a gamut compression that emphasizes colorimetric matching for colors inside the printer color gamut and crushes colors outside it can be implemented.
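Equation (1) itself is not reproduced in this text, but a linear compression consistent with the distances X, F, D, and T described above would map the interval [F, T] onto [F, D] along the line through O, leaving points at or inside the non-compressed boundary untouched. The following sketch assumes that common linear form; it is an illustration, not the patent's exact function.

```python
import math

def compress_distance(X, F, D, T):
    """Linearly map distance X (measured from convergence point O) so that
    [F, T] -> [F, D]. Distances at or inside the non-compressed boundary
    (X <= F) are unchanged. Assumes T > F (source gamut extends past the
    non-compressed boundary)."""
    if X <= F:
        return X
    return F + (X - F) * (D - F) / (T - F)

def compress_point(p, o, F, D, T):
    """Move point p toward convergence point o along the o-p line, to the
    compressed distance."""
    X = math.dist(o, p)
    t = compress_distance(X, F, D, T) / X
    return tuple(oc + t * (pc - oc) for oc, pc in zip(o, p))
```

With F = 4, D = 8, T = 12, a point at distance 12 (the source gamut boundary) lands exactly on the printer gamut boundary at distance 8, while a point at distance 3 stays put.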

Next, the black crushing processing and whitening processing according to embodiments of the present invention will be described with reference to FIG. 6. One way the copy function is used is to copy an original and then copy the resulting printout again as a new original. Even when copying is repeated in this way, to achieve good image reproduction, colors at or above a specific lightness (for example, the lightness of the white point of the printer color gamut) are converted to the white of the printer color gamut, i.e., the white of the recording paper (whitening). Similarly, colors at or below a specific lightness (for example, the lightness of the black point of the printer color gamut) are converted to the black of the printer color gamut (black crushing).

FIG. 6 illustrates black crushing and whitening as movements of colors (points) within the color gamut. As in FIG. 5, color gamuts 801 and 802 are projections of the sRGB color gamut and the printer color gamut onto the L*a* plane, respectively.

Color gamut 802 is the printer color gamut when recording on the recording medium used for copying. Point 803 is the white point of the printer color gamut 802, and L_Wt is its lightness. In the sRGB color gamut 801, all grid points in the region with lightness L_Wt or higher are converted into point 803; this is the whitening process. Likewise, point 804 is the black point of the printer color gamut, and L_Bk is its lightness. In the black crushing process, all grid points in the sRGB color gamut 801 with lightness L_Bk or lower are converted into point 804.

As shown in FIG. 6, if the input document has color gamut 805, for example, the color marked with the white triangle has a lightness higher than L_Wt and is therefore reproduced as white on the recording medium used. The color marked with the filled (black) triangle has a lightness lower than L_Bk and is reproduced as the black of the printer color gamut. Hereinafter, L_Bk is called the black crushing lightness and L_Wt the whitening lightness. Here L_Bk is the black point of the printer color gamut, but it is not limited to this; for example, a black point in the original may be read as a brighter color due to reading error, so L_Bk may be set at a position brighter than the black point of the printer color gamut. The same applies to L_Wt.
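A minimal sketch of the whitening and black crushing thresholds described above, in Python. The threshold lightnesses and the printer black/white points are assumed values for illustration only.

```python
L_BK = 10.0   # black crushing lightness (assumed value)
L_WT = 95.0   # whitening lightness (assumed value)
BLACK_POINT = (10.0, 0.0, 0.0)   # printer black point in L*a*b* (assumed)
WHITE_POINT = (95.0, 0.0, 0.0)   # printer white point in L*a*b* (assumed)

def crush(lab):
    """Map colors at or below L_BK to the printer black point and colors at
    or above L_WT to the printer white point; pass all other colors through."""
    L = lab[0]
    if L <= L_BK:
        return BLACK_POINT
    if L >= L_WT:
        return WHITE_POINT
    return lab
```

Because the outputs are exactly the printer's black and white points, repeating the copy cycle keeps near-black and near-white colors pinned instead of drifting toward gray.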

Next, the color separation table used in the color separation processing of step 506 in FIG. 3 will be described. When the image signal obtained by the output device color conversion (gamut mapping) of step 505 is an RGB signal, each RGB signal in the color gamut corresponds one-to-one with a color specified in a colorimetric space (for example, CIE L*a*b* values). Therefore, for example, 729 grid points are defined at equal intervals in the color space of the RGB signal, and color patch data corresponding to these 729 grid points is prepared and recorded by the recording device. By measuring the recorded patches, the color of each grid point, represented by printer-specific RGB values, can be specified as a color of, for example, the CIE L*a*b* color system. Next, each grid point of the sRGB color space compressed by the processing of step 505 is converted into a CIE L*a*b* color, and the grid point with the smallest color difference among the 729 colorimetric values is found. Then, by interpolating among the grid points around that minimum-color-difference point, the printer RGB values corresponding to the grid point of the sRGB color space are obtained. In this way, a color separation table can be created that describes with which ink colors of the recording apparatus each color of the input color space is output.
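The nearest-grid-point search in the table creation above can be sketched as follows (the interpolation around the minimum is omitted). The CIE76 color difference, i.e., Euclidean distance in L*a*b*, is assumed here; the patent does not name a specific delta-E formula.

```python
import math

def nearest_patch(target_lab, patches):
    """patches: list of (printer_rgb, measured_lab) pairs from the recorded
    and colorimetrically measured color patches. Return the printer RGB of
    the patch whose measured L*a*b* is closest to `target_lab` (smallest
    CIE76 color difference)."""
    best_rgb, best_de = None, float("inf")
    for rgb, lab in patches:
        de = math.dist(target_lab, lab)  # CIE76 delta-E (assumed metric)
        if de < best_de:
            best_rgb, best_de = rgb, de
    return best_rgb
```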

Processing Unit

FIGS. 7A to 7C illustrate the processing unit of the correction/processing performed in step 503 of the image processing shown in FIG. 3. The description of this processing unit is particularly relevant to the fourth and fifth embodiments described later.

FIG. 7A shows the case where the processing unit is a pixel. In correction/processing, the pixel marked with a circle in FIG. 7A is first set as the target pixel. Next, a region of 7 × 7 pixels centered on the target pixel (the 7 × 7 region, surrounded by the thick line) is set, and the pixel value of the target pixel is corrected using the image signal in that region.

After this processing, a pixel adjacent to the target pixel, such as the pixel marked with × in FIG. 7B, is set as the next target pixel, a 7 × 7 region is set around it in the same way, and the correction/processing is executed. The target pixel is then moved one pixel at a time in the same manner, a 7 × 7 region being set each time, so that all pixels to be processed are corrected.

When the processing unit is a region, a 7 × 7 region is set for the circled pixel in FIG. 7A, and the correction strength set for the circled pixel is applied to a plurality of pixels in the 7 × 7 region, for example all of them. The processing unit is then moved by setting a 7 × 7 region for the Δ-marked pixel shown in FIG. 7C, so that the 7 × 7 region for the circled pixel and that for the Δ-marked pixel are adjacent. Note that the correction strength can be set with higher accuracy when the processing unit is a pixel.

FIG. 8 is a flowchart explaining the movement of the processing unit. In step 1001, the processing target is set: the first target immediately after the start of the process, or the next target when returning from step 1005 to step 1001. Next, in step 1002, the processing area is set. As described above, the processing area is an area of a plurality of pixels including the processing unit (the 7 × 7 region in the above example).

In step 1003, image area separation is performed: the area containing the processing unit is examined and its area information is determined, that is, whether it is an edge area such as a character or a flat area such as a photographic image. Next, in step 1004, processing/correction is performed based on this area determination. In step 1005, it is determined whether correction has been completed for all processing targets; if not, the processing from step 1001 is repeated.
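The loop of FIG. 8 can be sketched as follows for the pixel-unit case. `classify` and `correct` are hypothetical hooks standing in for the image area separation (step 1003) and processing/correction (step 1004), and edge pixels are handled here by clamping coordinates, which the patent does not specify.

```python
def process_image(img, width, height, classify, correct):
    """Iterate over every pixel of a flat row-major grayscale image: set a
    7x7 region around the target pixel, classify the region ("edge"/"flat"),
    then correct the target pixel according to the region type."""
    out = list(img)
    for y in range(height):
        for x in range(width):
            region = []
            for dy in range(-3, 4):          # 7x7 region, clamped at edges
                for dx in range(-3, 4):
                    yy = min(max(y + dy, 0), height - 1)
                    xx = min(max(x + dx, 0), width - 1)
                    region.append(img[yy * width + xx])
            area = classify(region)          # step 1003 (caller-supplied)
            out[y * width + x] = correct(img[y * width + x], area)  # step 1004
    return out
```

With a pass-through `correct`, the image is unchanged, which is a convenient check that the traversal visits every pixel exactly once.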

In the following, several embodiments of the pixel value modulation processing and pixel value processing based on the configuration of this embodiment described above will be explained. The ranges that the image signals can take are given as examples; they are not limiting and can be set appropriately according to the configuration of the MFP apparatus, the image processing, and so on.

(Embodiment 1)
FIGS. 9A to 9C are flowcharts showing details of the correction/processing (S503) according to the first embodiment of the present invention. In this embodiment, as shown in FIG. 9A, because black crushing is performed as the luminance processing (S1300 in FIG. 9A), luminance modulation processing (S1200 in FIG. 9A) is performed beforehand for flat images such as photographic images. This achieves good reproducibility of both photographic and character images without providing a separate processing configuration such as an extra table. In the following processing, unless otherwise specified, calculation results below 0 are clipped to 0 and results above 255 are clipped to 255.

  In FIG. 9A, first, in step 1000, target pixel information is acquired; that is, the RGB signal value and the character attribute value M of the target pixel are acquired. Here, the character attribute value M is a feature amount indicating whether the pixel of interest constitutes a character/line drawing (a pixel in the character area). When M is 0, the pixel of interest is determined to be a pixel constituting a natural image or gradation image (a pixel in the photographic area); when M is 1, it is determined to be a pixel constituting a character/line drawing. A known technique such as image area separation or pattern matching can be used to obtain M. Next, in step 1100, image discrimination is performed; that is, the character attribute value M obtained above is examined. If M is 0, that is, the pixel is in a photographic area other than a character/line drawing, the process proceeds to step 1200. If M is 1, that is, the pixel is in the character/line drawing area, the process proceeds to step 1300.

  If the pixel of interest is in the photographic area, the luminance modulation processing in step 1200 is performed. FIG. 9B is a flowchart showing details of the luminance modulation processing.

In this process, first, in step 1210, the luminance is calculated for each pixel of interest from its RGB data. The luminance value Y1 is calculated by the following formula.
Y1 = 0.299 × R + 0.587 × G + 0.114 × B (2)
In this embodiment, the luminance Y1 calculated by Expression (2) is used, but any other value may be used as long as it represents the brightness component of the input pixel. For example, the value L in the L*a*b* color space or the Luv color space can also be used. Further, instead of using the lightness or brightness defined in these color spaces as is, an approximate expression may be used to simplify the calculation.

Next, in step 1220, the luminance modulation amount is calculated. As will be described later, the luminance modulation amount is added to each pixel value as noise; it is generated in this step. The luminance modulation amount dY is calculated from the luminance value Y1 obtained above by the following formula.
dY = 0 (Y1 ≧ Ys)
dY = Yb × (1−Y1 / Ys) (Y1 <Ys) (3)

  FIG. 10 is a diagram showing the relationship of the above Equation (3). In Equation (3) and FIG. 10, Ys represents the luminance value at which luminance modulation starts in the black crushing process, and Yb represents the luminance value that is reduced to luminance 0 by the black crushing process. In this embodiment, Yb is the luminance obtained from the RGB values read by scanning solid black printed by the MFP apparatus. This makes it possible to obtain a recording result that maintains the so-called solid black density during repeated copying.

  The values of Ys and Yb can be set according to the input/output characteristics of the recording apparatus, such as an MFP apparatus, that implements the present invention. For example, if the device cannot obtain stable recording density or scan data, Ys and Yb may be set higher, or a margin may be taken instead of using in Equation (3) the same Ys and Yb used in the luminance processing described later.

  Next, in step 1230, the luminance modulation code is calculated; that is, the modulation code value F that determines whether to perform modulation that adds luminance (+) or modulation that subtracts it (−) is calculated for each pixel of interest. In the present embodiment, F = +1 when the coordinates of the pixel of interest are even in both x and y, or odd in both x and y, and F = −1 otherwise. As shown in FIG. 11, in the two-dimensional array of pixels the sign of the modulation amount is determined by the position of the modulation target pixel, so that luminance addition and subtraction alternate in both the column and row directions.

  In step 1240, the luminance is modulated; that is, the modulated luminance Y′ (8 bits) is calculated from the luminance modulation amount dY, the luminance modulation code F, and the luminance value Y1 obtained above by the following formula.

Y ′ = Y1 + dY × F / 255 (4)
Finally, in step 1250, the pixel value after luminance modulation is calculated. That is, the modulated pixel value R′G′B′ is calculated from the luminance Y′ obtained above and the pixel value RGB by the following formulas, in which the two parenthesized quantities are the color-difference components of the original pixel:
R′ = Y′ + 1.371 × (0.511 × R − 0.428 × G − 0.083 × B)
G′ = Y′ − 0.698 × (0.511 × R − 0.428 × G − 0.083 × B)
  − 0.336 × (−0.172 × R − 0.339 × G + 0.511 × B)
B′ = Y′ + 1.732 × (−0.172 × R − 0.339 × G + 0.511 × B)
... Formula (5)

  With the above processing, the luminance of each pixel of interest in an image region such as a photograph is increased or decreased, as noise, by a modulation amount corresponding to the pixel's own luminance value.
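The luminance modulation of steps 1210 to 1250 can be sketched as follows. This is an illustrative reading, not the patented implementation: the values of Ys and Yb are placeholders, dY from Formula (3) is applied directly in luminance units (the ×1/255 factor printed in Formula (4) is read here as a scale normalization and omitted), and the RGB reconstruction uses the standard color-difference inverse of Formula (2).

```python
def clip8(v):
    # Clip to the 8-bit signal range [0, 255], as stated for all calculations.
    return max(0, min(255, int(round(v))))

def modulate_pixel(r, g, b, x, y, Ys=96, Yb=32):
    """Luminance modulation of one photographic-area pixel (steps 1210-1250).

    Ys, Yb are illustrative; the patent derives them from the device's
    input/output characteristics."""
    # Step 1210: luminance (Formula (2)).
    Y1 = 0.299 * r + 0.587 * g + 0.114 * b
    # Step 1220: modulation amount (Formula (3)).
    dY = 0.0 if Y1 >= Ys else Yb * (1.0 - Y1 / Ys)
    # Step 1230: checkerboard modulation sign (FIG. 11).
    F = 1 if (x % 2) == (y % 2) else -1
    # Step 1240: modulated luminance (Formula (4), dY taken in luminance units).
    Y2 = Y1 + dY * F
    # Step 1250: rebuild RGB from the new luminance and the original
    # color-difference components (the parenthesized terms of Formula (5)).
    U = -0.172 * r - 0.339 * g + 0.511 * b
    V = 0.511 * r - 0.428 * g - 0.083 * b
    return (clip8(Y2 + 1.371 * V),
            clip8(Y2 - 0.336 * U - 0.698 * V),
            clip8(Y2 + 1.732 * U))
```

For a gray pixel (R = G = B) the color-difference terms vanish, so only the luminance moves; neighboring pixels inside the black-crush range move in opposite directions, producing the checkered pattern of FIG. 11.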

  FIGS. 12A to 12D are diagrams for explaining how a continuous-tone gradation image is modulated with respect to black crushing.

  FIG. 12A shows the gradation image before modulation: a gradation from black to gray is formed from left to right in the figure. When the modulation processing of this embodiment is applied to it, the image shown in FIG. 12C is obtained. Since the modulation amount is zero in the bright region, which is not subject to black crushing, the same gray levels as in FIG. 12A appear there. The pixels in the black-crush region, however, are modulated, and pixels brighter and darker than the original pixel values alternate to form a checkered pattern. Here, a pixel made brighter than its original value has a luminance above the luminance Yb that black crushing reduces completely to black, so the original gradation can be preserved even after black crushing is performed, as will be described later.

  In the above-described processing, clipping is performed when a signal value is less than 0 or greater than 255. For this reason, the luminance sum of the entire image may change between before and after the modulation because of the clipping; specifically, a luminance that has become negative after modulation in a dark portion is clipped to zero. If the luminance sum need not be preserved, the result may be used as is. If it must be preserved, the following procedure may be added: the value cut off when the luminance is clipped is held, and that amount is added back when the modulation process is performed on other pixels. As a result, when the apparatus carrying out the present invention performs, after the luminance modulation, a correction process that takes a luminance histogram of the image and uses, for example, its average value, the correction result can be kept unaffected. Further, if luminance preservation is not the purpose, the modulation code value F may always be positive. In that case, the equation for calculating dY should be chosen so that the magnitude relationship of Y′ between pixels maintains the original magnitude relationship of Y before and after modulation.

  Referring again to FIG. 9A, after the luminance modulation, in step 1300 the luminance processing is performed on the target pixel. FIG. 9C is a flowchart showing details of the luminance processing.

  In FIG. 9C, first, in step 1310, the luminance is calculated based on the RGB values after the luminance modulation in step 1200. Specifically, based on the pixel value R′G′B′ obtained in step 1200, the luminance Y2 is calculated using the following formula.

Y2 = 0.299 × R ′ + 0.587 × G ′ + 0.114 × B ′ (6)
Note that if the modulated pixel value R′G′B′ need not be retained as information, steps 1250 and 1310 may be omitted and Y′ used as is, i.e., Y2 = Y′.

Next, in step 1320, luminance black crushing is performed. That is, based on the luminance Y2 obtained above, the luminance Y″ after the black crushing process is obtained by the following formula.
Y ″ = f (Y2) (7)
Here, f(y) is a function that varies with the input luminance y; in the present embodiment it is realized using the one-dimensional lookup table shown in FIG. 13. With this black crushing process, a good black image is always obtained even with repeated copying, and black characters can be reproduced clearly.

  Note that this lookup table may also incorporate other luminance processing. For example, a so-called background removal process may be performed simultaneously by saturating the high-luminance part.
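FIG. 13 itself is not reproduced in this text, but a one-dimensional lookup table of the kind Formula (7) describes might be built as below. The linear ramp between Yb and Ys is an assumption made for illustration; the actual table shape is device-dependent.

```python
def build_black_crush_lut(Ys=96, Yb=32):
    """Hypothetical 1D LUT for f(y) in Formula (7): inputs at or below Yb
    are crushed to 0, inputs at or above Ys pass through unchanged, and
    the range in between is stretched linearly so the curve stays
    continuous."""
    lut = []
    for y in range(256):
        if y <= Yb:
            lut.append(0)                                  # crushed to black
        elif y < Ys:
            lut.append(int(round((y - Yb) * Ys / (Ys - Yb))))  # linear ramp
        else:
            lut.append(y)                                  # identity
    return lut
```

Applying the table is then a single indexing operation per pixel, `Y2_crushed = lut[Y2]`, which is why a LUT is attractive for per-pixel luminance processing.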

After the luminance processing, in step 1330, the pixel value after the luminance black crushing process is calculated. Specifically, the pixel value R″G″B″ after the black crushing process is calculated from the Y″ and RGB values obtained above by the following formulas:
R″ = Y″ + 1.371 × (0.511 × R − 0.428 × G − 0.083 × B)
G″ = Y″ − 0.698 × (0.511 × R − 0.428 × G − 0.083 × B)
  − 0.336 × (−0.172 × R − 0.339 × G + 0.511 × B)
B″ = Y″ + 1.732 × (−0.172 × R − 0.339 × G + 0.511 × B)
... Formula (8)

  According to the processing of the present embodiment described above, the pixels of interest in a photographic-area image that would be subject to black crushing are converted by the luminance modulation processing (S1200) into either "pixels with a luminance that is subject to black crushing" or "pixels with a luminance that is not subject to black crushing". In other words, because of the modulation by noise addition, some pixels whose original values fall in the black-crush range (the predetermined range) are converted into pixels with a luminance outside that range. The luminance of such converted pixels does not change in the black crushing process (S1300), so pixels that escape black crushing can exist in the photographic area. Accordingly, even if the black crushing process set for the character/line drawing area is applied in the same way to the photographic area (S1300), its effect can be weakened.

  FIG. 12B described above shows the image obtained when the black crushing process is applied to the original image of FIG. 12A without the modulation processing of this embodiment. In this case, the dark gradation is crushed by the black crushing process, and the leftmost few lines become a solid black image.

  On the other hand, FIG. 12D shows the result of applying the black crushing process to the original image of FIG. 12A after it has been changed by the modulation processing of this embodiment as shown in FIG. 12C. In this image, as described above, pixels with a "luminance not subject to crushing" are present at regular intervals even in the dark portion, so the gradation carried by these pixels remains after black crushing, and the crushing seen in FIG. 12B is avoided.

  In the above embodiment, the pixel value modulation (luminance modulation) and the pixel value processing (luminance processing) are performed consecutively. However, as long as the meaning of the applied modulation is not impaired, other processing may be performed between the two. The same applies to the other embodiments that follow.

  In the present embodiment, the character attribute value M is binary information of 0 or 1, but it may instead be multi-value information based on the degree of character likelihood. For example, M may be represented by 8 bits: M = 0 means a pure photographic-area pixel, M = 255 a pure character-area pixel, and values in between express how strongly the pixel resembles a character-area pixel. In that case, the character attribute discrimination in step 1100 need only test whether M falls below an arbitrary threshold. Further, the modulation amount may be varied according to the value of M; in this case the branching in step 1100 is not necessarily required, and an optimal modulation amount dY corresponding to M may be obtained when the modulation amount is calculated in step 1220.

Specifically, the above Formula (3) may be changed, for example, as follows.
dY = 0 (Y1 ≥ Ys)
dY = Yb × (1 − Y1 / Ys) × (255 − M) / 255 (Y1 < Ys)
... Formula (9)

Accordingly, when M is large the pixel is likely to be in the character area, so the modulation amount is set low; when M is small the pixel is likely to be in the photographic area, so the modulation amount can be set high. In this way, in addition to the effects described in the present embodiment, the switching between regions can be made inconspicuous and the image quality improved. The same applies to the other embodiments that follow.
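A minimal sketch of the M-graded modulation amount follows; the /255 normalization is an assumption made so that M = 0 (pure photographic pixel) reproduces Formula (3) and M = 255 (pure character pixel) yields no modulation, and Ys, Yb are placeholder values.

```python
def modulation_amount(Y1, M, Ys=96, Yb=32):
    """Modulation amount dY graded by the 8-bit character attribute M
    (Formula (9), with an assumed /255 normalization): the more
    character-like the pixel, the weaker the modulation."""
    if Y1 >= Ys:
        return 0.0
    return Yb * (1.0 - Y1 / Ys) * (255 - M) / 255.0
```

Because dY varies continuously with M, the hard branch of step 1100 is no longer needed and region transitions are not visible as a modulation step.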

(Embodiment 2)
In the first embodiment described above, the pixel value is modulated by the processing of steps 1220 to 1240. In this method, since the sign of the modulation amount is determined by the position of the pixel, the pixels in the modulated area are modulated in a checkered pattern. Depending on the output resolution of the recording apparatus, this regular change in pixel brightness may appear as a pattern in the output and impair image quality. Further, when the input image is, for example, a halftone print, and the spatial frequency characteristic of the image itself interferes with that of the checkered-pattern modulation of the above embodiment, moiré may occur. The present embodiment relates to a method for improving this.

  FIGS. 14A to 14C are flowcharts showing the correction/processing according to the second embodiment of the present invention. In these figures, the processing of steps 2000 to 2210 and steps 2240 to 2400 is the same as that of steps 1000 to 1210 and steps 1250 to 1400 described above, and its description is therefore omitted.

  In the present embodiment, the luminance replacement process (S2200 in FIG. 14A) is performed to modulate the luminance while making the modulation irregular.

In FIG. 14B, which shows the details of the luminance replacement process, step 2220 calculates the replacement probability; that is, the luminance replacement probability pY is calculated from the luminance Y1 by the following formulas.
pY = 0 (Ys ≤ Y1)
pY = (pYmax / (Ys − Yb)) × (Ys − Y1) (Yb < Y1 ≤ Ys)
pY = (pYmax / Yb) × Y1 (0 ≤ Y1 ≤ Yb)
... Formula (10)
Here, pY represents the replacement probability on a 0–255 scale: 0 means probability zero and 255 means probability 1. pYmax is set in consideration of how much of the gradation near black should remain in the photographic gradation. Each value can be set according to the input/output characteristics of the MFP apparatus when this processing is performed. Also, as described in the first embodiment, the Ys and Yb set in the black crushing process can be used, and an arbitrary margin may be taken. FIG. 15 is a diagram showing the relationship between pY and Y1.

Next, in step 2230, luminance replacement is performed. Specifically, the luminance Y′ after the replacement process is calculated from the pY and Y1 obtained above as follows. First, a value from 1 to 255 is generated using a random number generator. When the generated value is less than or equal to pY, the luminance of the target pixel is replaced according to the following expression.
Y ′ = eY Formula (11)
Here, eY is the replacement luminance, and in this embodiment, eY = Ys.

When the generated value exceeds pY, the luminance Y1 of the target pixel is used as it is.
Y ′ = Y1 Formula (12)
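Steps 2220 and 2230 can be sketched as follows. This is an illustrative sketch: pYmax, Ys, and Yb are placeholder values, and Python's `random` module stands in for whatever random number generator the device provides.

```python
import random

def replace_probability(Y1, Ys=96, Yb=32, pYmax=128):
    """Replacement probability pY of Formula (10) on a 0-255 scale
    (255 = probability 1). It peaks around Yb and falls to zero both at
    Ys and at luminance 0."""
    if Y1 >= Ys:
        return 0.0
    if Y1 > Yb:
        return (pYmax / (Ys - Yb)) * (Ys - Y1)
    return (pYmax / Yb) * Y1

def replace_luminance(Y1, rng, Ys=96, Yb=32, pYmax=128, eY=None):
    # Step 2230: replace the luminance with eY (= Ys in this embodiment)
    # when a random draw falls at or below pY; otherwise keep Y1 as is
    # (Formulas (11) and (12)).
    eY = Ys if eY is None else eY
    draw = rng.uniform(0, 255)
    return eY if draw <= replace_probability(Y1, Ys, Yb, pYmax) else Y1
```

Since the draw is independent per pixel, the replaced pixels form an irregular scatter rather than the fixed checkerboard of the first embodiment, which is exactly what suppresses visible patterning and moiré.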

  According to the present embodiment described above, the following effects are obtained in addition to those of the first embodiment. That is, by performing the modulation irregularly, the regular change in pixel brightness caused by the modulation can be suppressed from appearing as a pattern in the output. Further, the occurrence of moiré due to interference between the spatial frequency characteristics of the modulation and those of a halftone-printed document can be suppressed.

In this embodiment, the replacement luminance eY is set to Ys. However, eY may be any other value as long as it is not a value (0 to Yb) that the luminance processing in step 2310 reduces to zero luminance. For example, by setting a value lower than Ys, the change in pixel value before and after replacement can be reduced, making the replaced pixels less conspicuous in the image. In that case, the effect of this embodiment can be obtained by adjusting pY and eY together.
Further, the modulation amount itself may be determined by a random number, or the modulation code F may be determined by a random number.

(Embodiment 3)
In the first and second embodiments described above, the case where black crushing is performed as the pixel value processing has been described. In the present embodiment, the application of the present invention to saturation enhancement processing in the high-saturation part is explained.

  Considering the output color design when copying and outputting color characters, it is desirable to make the characters stand out clearly in the high-saturation portion. To that end, it is effective to enhance the saturation component of highly saturated pixels. On the other hand, while emphasizing saturation in a natural image improves its appearance, an extreme saturation enhancement of the kind used for text destroys the gradation in the high-saturation part. To prevent this, it is necessary to use saturation processing that clearly emphasizes color characters in the character and line drawing areas of the document, and saturation processing with an enhancement amount that improves the appearance while maintaining continuous tone in the other areas. However, this requires providing a processing table for each, and switching between them according to the pixel attribute of the target pixel may be difficult to realize in terms of operation speed and memory efficiency.

  The present embodiment relates to a configuration in which color characters are output clearly without switching a plurality of processing tables, and gradation is maintained in a high-saturation region of a natural image.

  FIGS. 16A and 16B are flowcharts showing the correction/processing according to the third embodiment of the present invention. In these figures, steps 3000 to 3100 and step 3400 are the same as steps 1000 to 1100 and step 1400 described above, and their description is therefore omitted here.

In the present embodiment, the saturation modulation (S3200 in FIG. 16A) is performed in the same manner as the luminance modulation described above. In FIG. 16B, which shows the details of the saturation modulation, color information is first calculated in step 3210. Specifically, the hue H1, the saturation S1, and the brightness V1 are calculated from the pixel value RGB of each target pixel by the following formulas, using the color conversion generally known as RGB → HSV conversion. Here, the maximum of the RGB values is MaxRGB and the minimum is MinRGB.
S1 = (MaxRGB−MinRGB) / MaxRGB
V1 = MaxRGB

H1 is calculated differently depending on which of R, G, or B gives MaxRGB.
When MaxRGB is the R signal value:
H1 = 60 × (G − B) / (MaxRGB − MinRGB) + 0
When MaxRGB is the G signal value:
H1 = 60 × (B − R) / (MaxRGB − MinRGB) + 120
When MaxRGB is the B signal value:
H1 = 60 × (R − G) / (MaxRGB − MinRGB) + 240
... Formula (12)

In the above example, H1 and S1 calculated by Expression (12) are used, but other values may be used as long as they represent the color component and vividness component of the input pixel. For example, the hue and saturation in the L*a*b* color space or the Luv color space may be used. Further, instead of using the equations defined by these color spaces as is, an approximate expression may be used to simplify the calculation.

Next, in step 3220, a color modulation amount is calculated. Specifically, the color modulation amount dS is calculated by the following calculation formula from S1 obtained above.
dS = 0 (S1 < Ss)
dS = (dSMax / (255 − Ss)) × (S1 − Ss) (S1 ≥ Ss)
... Formula (13)
Here, dSMax represents the maximum modulation amount and Ss the saturation modulation threshold. In the present embodiment, for the saturation enhancement performed in the subsequent step, the minimum saturation subject to enhancement is obtained for each of the RGBCMY hues, and their average value is used as Ss. dSMax is set, within a range not exceeding (255 − Ss) so that the magnitude relationship between saturations does not invert before and after color modulation, to a value that matches the balance of tone reproduction and saturation enhancement in a photographic image.

Each value may be set arbitrarily according to the input/output characteristics of the MFP apparatus when the present invention is implemented. For example, if the apparatus cannot obtain stable print density or scan data, Ss and dSMax may be increased or given a margin. Further, as described for the previous step, they may be set based on the saturation in the L*a*b* color space or the Luv color space. When the saturation enhancement differs for each color, the saturation enhancement amount and the minimum saturation subject to enhancement may be obtained for the hue corresponding to the H found in the previous step, and dSMax and Ss may be changed according to H.

  FIG. 17 is a diagram showing the relationship between dS and S1.

  Next, in step 3230, the color modulation direction is calculated. That is, the modulation code value F that switches between modulation that adds saturation and modulation that subtracts it is calculated for the target pixel. As before, F = +1 when the coordinates of the pixel of interest are even in both x and y, or odd in both x and y, and F = −1 otherwise.

In step 3240, color modulation is performed. Specifically, the saturation S2 after modulation is calculated from S1, F, and dS obtained above.
S2 = S1 + F × dS
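Steps 3220 to 3240 can be sketched as follows, assuming the saturation is handled on a 0–255 scale (i.e., the S1 of Formula (12) multiplied by 255), since Formula (13) uses the term (255 − Ss); Ss and dSMax are placeholder values.

```python
def modulated_saturation(S1, x, y, Ss=64, dSMax=48):
    """Steps 3220-3240: saturation modulation on a 0-255 saturation
    scale. Ss and dSMax are illustrative device-dependent settings,
    with dSMax kept below 255 - Ss so the saturation ordering is
    preserved across the modulation."""
    # Step 3220: color modulation amount dS (Formula (13)).
    dS = 0.0 if S1 < Ss else (dSMax / (255.0 - Ss)) * (S1 - Ss)
    # Step 3230: checkerboard modulation sign, as in the luminance case.
    F = 1 if (x % 2) == (y % 2) else -1
    # Step 3240: modulated saturation S2.
    return S1 + F * dS
```

Low-saturation pixels (below Ss) pass through unchanged, while high-saturation neighbors alternate between slightly more and slightly less saturated values.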

Finally, in step 3250, a post-modulation pixel value is calculated. Specifically, the pixel values R2, G2, and B2 after modulation are calculated from H1, V1, and S2 obtained above.
If S2 = 0,
R2 = G2 = B2 = V1

When S2 > 0, the following formulas, the color conversion generally known as HSV → RGB conversion, are used. Here "int(x)" represents the value obtained by discarding the fractional part of x.
i = int (H1 / 60)
f = H1 / 60 − i
p1 = V1 * (1-S2)
p2 = V1 * (1-S2 * f)
p3 = V1 * (1-S2 * (1-f))
R2 = V1, G2 = p3, B2 = p1 (i = 0)
R2 = p2, G2 = V1, B2 = p1 (i = 1)
R2 = p1, G2 = V1, B2 = p3 (i = 2)
R2 = p1, G2 = p2, B2 = V1 (i = 3)
R2 = p3, G2 = p1, B2 = V1 (i = 4)
R2 = V1, G2 = p1, B2 = p2 (i = 5)
Further, as described for the previous step, when the modulation is performed based on the saturation in the L*a*b* color space or the Luv color space, the corresponding inverse conversion from that color space to RGB may be performed.
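The case table of step 3250 can be sketched as below, with S2 on a 0–1 scale and V1 on 0–255, and with f taken as the fractional part of H1/60 (the 0–1 normalization the standard HSV → RGB conversion requires).

```python
def hsv_to_rgb_patent(H1, S2, V1):
    """HSV -> RGB following the six-case table of step 3250. S2 is in
    0-1 and V1 in 0-255; i selects the 60-degree hue sector and f is
    the position within it."""
    if S2 == 0:
        return (V1, V1, V1)          # achromatic: R2 = G2 = B2 = V1
    i = int(H1 / 60) % 6
    f = H1 / 60.0 - int(H1 / 60)     # fractional part within the sector
    p1 = V1 * (1 - S2)
    p2 = V1 * (1 - S2 * f)
    p3 = V1 * (1 - S2 * (1 - f))
    return [(V1, p3, p1), (p2, V1, p1), (p1, V1, p3),
            (p1, p2, V1), (p3, p1, V1), (V1, p1, p2)][i]
```

Each sector permutes (V1, p2 or p3, p1) so that the resulting RGB varies continuously as H1 crosses sector boundaries.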

  Referring to FIG. 16A again, after the saturation modulation process (S3200) described above, saturation enhancement is performed in step 3300 based on the modulated pixel values R2, G2, and B2. Here, a saturation enhancement process with an enhancement amount suitable for characters/line drawings is held as three-dimensional lookup table information, and the saturation enhancement is performed by referring to it.

  In this saturation enhancement, the pixels of a photographic-area image that would be subject to enhancement have been modulated by the processing of this embodiment into either "pixels subject to saturation enhancement" or "pixels not subject to saturation enhancement". The saturation of a pixel that the modulation has turned into a "pixel not subject to saturation enhancement" does not change in the enhancement processing, so such pixels can exist in the photographic image region. As a result, even if the saturation enhancement set for the character/line drawing area is applied in the same way to the photographic image area, its effect can be weakened.

  According to the present embodiment described above, the following effects are obtained in addition to those of the first embodiment: color characters can be output clearly without switching between multiple processing tables, and good gradation can be maintained in the high-saturation region of a natural image.

  In this embodiment, the saturation enhancement amount is controlled as an example, but the black crushing amount of the first and second embodiments may be controlled at the same time. In that case, the modulation direction can be expressed as a vector combining the brightness direction and the hue direction; that is, depending on the three-dimensional position in the color space determined from the RGB signal values of the pixel of interest, the signal value may be given directivity in an appropriate direction and modulated by an appropriate amount.

  More generally, when processing or color conversion optimized for the character/line drawing area is applied, performing pixel value modulation in the direction that weakens the effect of that processing for pixels identified as photographic areas allows the processing to act with an intensity suitable for each region.

(Embodiment 4)
In the first to third embodiments described above, the pixel value of each target pixel is modulated based on that pixel's own information. However, depending on, for example, the accuracy of the character/photograph determination of the image area separation, a pixel wrongly determined as a character/line drawing area may occur as an isolated singular point within a photograph, and pixel value modulation would then not be performed for it.

  In addition, for preserving gradation before and after the modulation, a method of carrying over clipped luminance was shown as an application of the first embodiment. However, that method cannot solve the following case. When the original uses area gradation, such as a newspaper advertisement, looking only at single pixels means that only the portions where halftone dots exist undergo pixel value modulation. If the halftone image is printed with a relatively dark ink, there may not be enough luminance headroom to modulate the luminance darker; if the pixels to be modulated are only the dark halftone-dot ink portions, this cannot be resolved, and the luminance of the original as a whole cannot be preserved during modulation.

  In addition, when the modulation amount and direction for a single pixel are determined from random numbers or pixel coordinates, the sum of the modulation is statistically zero over an area composed of a sufficiently large number of pixels, such as an entire recorded page, so the sum of signal values such as luminance and saturation of the original image is preserved before and after the modulation. However, when the image area to be modulated consists of only a small number of pixels, the probability that the sum of the signal values is preserved in the same way decreases.

  The present embodiment relates to a method for improving these, and performs modulation according to the distribution of pixel values in image data.

  FIGS. 18A to 18C are flowcharts showing the processing/correction according to the fourth embodiment of the present invention. In these drawings, the processing of steps 4100 to 4220 and steps 4250 to 4400 is the same as that of steps 1100 to 1220 and steps 1240 to 1400 described above, and its description is therefore omitted.

In FIG. 18A, in step 4000, peripheral information about the target pixel is acquired. Specifically, a processing area of 7 × 7 pixels centered on the target pixel is set, and the luminance Y is calculated from each pixel value of the processing area according to Expression (14).
Y = 0.299 × R + 0.587 × G + 0.114 × B (14)
Then, the average of the Y values over the 7 × 7 pixel area is used as the luminance value Y1 of the target pixel, and the average of the attribute values M over the 7 × 7 pixels is used as the pixel attribute value M′ of the target pixel. A weighted average over the 7 × 7 area that gives more weight to the vicinity of the target pixel may also be used to obtain M′ and Y1, emphasizing the M and Y values near the target pixel.

  In step 4230 in FIG. 18B, the modulation code is calculated. Here, a processing area of 4 × 4 pixels with the target pixel at its upper left is set, and the modulation code value F, which switches between modulation that adds luminance and modulation that subtracts it, is calculated for each pixel value of the processing area. F is set to +1 when the coordinates of a pixel in the processing region are even in both x and y, or odd in both x and y, and to −1 otherwise.

In step 4240, the modulation values are stored. Here, the modulation value of each pixel in the processing region described above is accumulated from dY (determined in step 4220 as in step 1220 of the first embodiment), Y1, and F. The modulation value of each pixel can be obtained by the following equation.
dY = dY0 (Y1 ≧ Ys)
dY = dY0 + dYMax / 16 × (1−Y1 / Ys) (Y1 <Ys)
... Formula (15)
Here, dY0 is the modulation value already given to the pixel by the modulation of another target pixel. In the above equation, the modulation amount obtained for the target pixel is added equally to each pixel, but it may instead be added with weights favoring the vicinity of the target pixel.
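Steps 4230 and 4240 can be sketched as an accumulation over the 4 × 4 region; dYMax and Ys are placeholder values, and a dictionary stands in for the per-pixel modulation buffer holding dY0.

```python
def accumulate_modulation(buf, px, py, Y1, Ys=96, dYMax=64):
    """Steps 4230-4240: distribute the target pixel's modulation over the
    4x4 region whose top-left corner is the target pixel (px, py).
    `buf` maps (x, y) -> accumulated modulation value dY0. Formula (15)
    adds dYMax/16 * (1 - Y1/Ys) per pixel, signed by the checkerboard
    code F of step 4230."""
    per_pixel = 0.0 if Y1 >= Ys else (dYMax / 16.0) * (1.0 - Y1 / Ys)
    for y in range(py, py + 4):
        for x in range(px, px + 4):
            F = 1 if (x % 2) == (y % 2) else -1
            buf[(x, y)] = buf.get((x, y), 0.0) + per_pixel * F
    return buf
```

Because F alternates in a checkerboard over the 4 × 4 block, the sixteen contributions from one target pixel sum to zero, which is what guarantees luminance preservation even for small image regions.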

  According to the present embodiment described above, the following effects are obtained in addition to those of the first embodiment. That is, the modulation amount can be determined from the pixel information of a plurality of pixels including the target pixel. By looking at the character attribute values of a plurality of pixels, a modulation result in which the switching of modulation is inconspicuous in the image can be obtained regardless of the accuracy of the image area separation. By looking at the pixel values of a plurality of pixels, luminance preservation before and after modulation can be guaranteed regardless of the printing form of the document. Furthermore, by modulating the pixel values of a plurality of pixels including the target pixel, luminance preservation before and after modulation can be guaranteed even for an image consisting of a local region with few pixels.

  In the present embodiment, the use of a plurality of input pixels and modulated pixels has been described with respect to the first embodiment, but the same approach can also be applied to the second and third embodiments. When the pixel value is modulated by adding a random number as in the second embodiment, it should be ensured, as in this embodiment, that the sum of the modulation amounts produced by one target pixel becomes zero. For example, the modulation amount for the target pixel itself is obtained in the same manner as in the second embodiment, and the value obtained by subtracting that modulation amount from 0 is then allocated to the remaining peripheral pixels at an arbitrary ratio and distribution. Further, by providing a plurality of pixels for either the input or the modulation, the respective effects can be obtained, and either may be adopted in accordance with restrictions on the performance and scale of the apparatus. The number and area of peripheral pixels referred to around the input pixel, and the number and area of pixels to be modulated, are not limited to those mentioned in the present embodiment and can be set arbitrarily.
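For the random-number variant mentioned above, the zero-sum constraint can be sketched as follows; the number of peripheral pixels, the range of the random modulation, and the even split are all illustrative choices:

```python
import random

def zero_sum_modulation(n_neighbors=8, d_max=4.0, seed=None):
    """Draw a random modulation amount for the target pixel, then
    allocate its negation evenly over the peripheral pixels so that
    the total modulation contributed by this target pixel is zero."""
    rng = random.Random(seed)
    d_target = rng.uniform(-d_max, d_max)   # modulation of the target pixel
    d_neighbor = (0.0 - d_target) / n_neighbors
    return d_target, [d_neighbor] * n_neighbors
```

The negated amount could just as well be split unevenly, or with weights favoring the nearest neighbors, as long as the total remains zero.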

(Embodiment 5)
In the first to fourth embodiments described above, the modulation amount is determined based on the pixel value (or its average) and the attribute value (or its average). However, it may be more appropriate to determine the modulation amount depending on whether the image containing the target pixel is an area gradation image or a density gradation image. For example, when the original is an area gradation image such as a halftone print, the gradation in a dark portion is expressed by how much blank area remains in the gaps between dots of C, M, Y, and K. That is, even if the luminance modulation processing according to the present invention is not performed, the blank portions remain after the processing, so copy output without noticeable loss of gradation may be possible to some extent. If the present invention is applied to such an original, a high-frequency component is newly added to the photographic image region, and the original image quality may be impaired.

  The present embodiment relates to a method for improving this.

  FIGS. 20A to 20C are flowcharts showing the correction/processing according to the fifth embodiment of the present invention. In these drawings, steps 5100 to 5210 and steps 5250 to 5400 are the same as steps 4100 to 4210 and steps 4230 to 4400 described above, and their description is therefore omitted here.

In step 5210 in FIG. 20B, a processing area of 7 × 7 pixels, composed of 7 horizontal pixels and 7 vertical pixels centered on the target pixel, is set, and the luminance Y is calculated from each pixel value of the processing area by equation (16):
Y = 0.299 × R + 0.587 × G + 0.114 × B (16)
The average of the Y values of the pixels in the 7 × 7 pixel region is used as the luminance value Y1 of the target pixel. Similarly, the average of the attribute values M over the 7 × 7 pixel region is used as the pixel attribute value M′ of the target pixel. Here, M′ and Y1 of the target pixel may instead be obtained by taking weighted averages over the 7 × 7 pixel region that give greater weight to M and Y at and near the target pixel.

Further, the maximum value YMax and the minimum value YMin of the Y values in the 7 × 7 pixel region are obtained, and the luminance width Yw is obtained by the following equation:
Yw = YMax − YMin (17)

  Here, if the target pixel belongs to a density gradation image, the luminance values of the surrounding pixels change continuously, so YMax and YMin are close to each other and Yw is small. On the other hand, if the target pixel belongs to an area gradation image, blank portions exist around it, so a luminance near white is selected as YMax, and Yw is larger than in the case of a density gradation image.

  In the next step 5220, the luminance width obtained above is evaluated against a luminance width threshold. Specifically, the luminance width Yw is compared with the threshold ThYw; when Yw > ThYw, the target pixel is determined to be a pixel of an area gradation image and the process proceeds to step 5240. When Yw ≤ ThYw, the target pixel is determined to be a pixel of a density gradation image and the process proceeds to step 5230.

  In step 5230, dY for the luminance modulation processing of the density gradation image is obtained in the same manner as in step 4220. In step 5240, on the other hand, dY = 0 is set, since the luminance modulation processing is not applied to the area gradation image.
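Steps 5210 to 5240 can be sketched as follows; the threshold ThYw and the flattened representation of the 7 × 7 region are illustrative:

```python
def classify_gradation(region_rgb, ThYw=96.0):
    """For a 7 x 7 processing region given as (R, G, B) tuples:
    compute the luminance Y of each pixel (Eq. 16), the average Y1,
    and the luminance width Yw = YMax - YMin (Eq. 17). A wide
    luminance range (dots plus blank gaps) indicates an area
    gradation pixel, for which dY = 0 is used (step 5240)."""
    ys = [0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in region_rgb]
    Y1 = sum(ys) / len(ys)
    Yw = max(ys) - min(ys)
    return Y1, Yw, Yw > ThYw   # True -> area gradation image
```

A flat gray region yields Yw = 0 (density gradation), while a halftone-like mix of near-white and near-black pixels yields a large Yw (area gradation).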

  According to the present embodiment described above, the following effects are obtained in addition to those of the first embodiment. That is, when the original is a density gradation image such as a silver halide photograph, the continuous gradation portion is prevented from being crushed by luminance modulation, and when the original is an area gradation image such as a halftone print, image quality can be maintained by not applying luminance modulation.

In the present embodiment, whether the target pixel belongs to a density gradation image or an area gradation image is determined by comparing Yw with ThYw, but other indices and thresholds may be used. For example, the variance of the Y values over the 7 × 7 region may be used. Further, although the determination above selects one of two alternatives, it may be performed in multiple stages. For example, instead of steps 5220 to 5240, a step of obtaining dY by the following equation, depending continuously on the value of Yw, may be provided:
dY = dY0 (Y1 ≥ Ys)
dY = dY0 + dYMax/16 × (1 − Y1/Ys) × (255 − Yw)/255
(Y1 < Ys)
... Formula (503)
According to the above formula, dY decreases when Yw is sufficiently large, and increases when Yw is sufficiently small.
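A sketch of this multi-stage variant of formula (503); as before, the default values of Ys and dYMax are illustrative:

```python
def accumulate_modulation_503(dY0, Y1, Yw, Ys=64.0, dYMax=32.0):
    """Formula (503): scale the formula (15) contribution by
    (255 - Yw) / 255, so the modulation fades out continuously as
    the luminance width Yw grows (area-gradation-like regions) and
    approaches the formula (15) value as Yw approaches 0."""
    if Y1 >= Ys:
        return dY0
    return dY0 + dYMax / 16.0 * (1.0 - Y1 / Ys) * (255.0 - Yw) / 255.0
```

This replaces the hard Yw > ThYw switch with a linear fade, avoiding a visible boundary between modulated and unmodulated regions.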

Brief Description of the Drawings

FIGS. 2A and 2B are an external perspective view of a multifunction printer (MFP) according to an embodiment of the present invention and a perspective view of the MFP with its document platen cover, which also serves as an auto document feeder, opened, respectively.
A block diagram showing a configuration for executing the control and image processing of the MFP.
A flowchart illustrating the image processing executed at the time of copying in the MFP.
A diagram illustrating the color gamut of a standard color space and the printer color gamut in the CIE L*a*b* color system.
A diagram explaining an example of the gamut compression used in one embodiment of the present invention.
A diagram explaining the details of white-out and black crushing.
(a) to (c): diagrams explaining the processing unit according to one embodiment of the present invention.
A flowchart explaining the operation of the processing unit according to this embodiment.
(a) to (c): flowcharts showing the details of the correction processing according to the first embodiment of the present invention.
A diagram showing the relational expression for obtaining the luminance modulation amount according to the first embodiment.
A diagram explaining the relationship between the modulation code F according to the first embodiment and the coordinates of the target pixel.
(a) to (d): diagrams explaining how a continuous-tone gradation image is modulated with respect to black crushing.
A diagram showing the one-dimensional lookup table for obtaining the luminance Y'' in the black crushing processing according to the first embodiment.
(a) to (c): flowcharts showing the details of the correction processing according to the second embodiment of the present invention.
A diagram showing the relationship between the replacement probability pY and the luminance Y1 according to the second embodiment.
(a) and (b): flowcharts showing the details of the correction processing according to the third embodiment of the present invention.
A diagram showing the relational expression for obtaining the saturation modulation amount according to the third embodiment.
(a) to (c): flowcharts showing the details of the correction processing according to the fourth embodiment of the present invention.
(a) to (c): flowcharts showing the details of the correction processing according to the fifth embodiment of the present invention.

Explanation of symbols

1 MFP apparatus
11 CPU
12 Image processing unit
13 Recording unit
14 Reading unit
15 Operation unit
16 ROM
17 RAM
18 Nonvolatile RAM
33 Recording device
34 Reading device

Claims (13)

  1. An image processing apparatus that executes image processing including processing for changing, based on a pixel value, the value of a component representing an image, the component being defined in a predetermined color space based on the pixel values of image data, when the component lies within a predetermined range in the color space,
    the apparatus comprising pixel value modulation means for modulating a pixel value by adding a modulation amount of the component to the component based on the pixel value in the image data so that the component lies outside the predetermined range, thereby reducing the number of pixels subjected to the processing.
  2. The image processing apparatus according to claim 1, further comprising image discrimination means for discriminating the type of the image data,
    wherein the pixel value modulation means modulates a pixel value when the image discrimination means determines that the image is other than a character/line image.
  3.   The image processing apparatus according to claim 1, wherein the processing is processing for processing the gradation of a brightness component of an image.
  4.   The image processing apparatus according to claim 3, wherein the process for processing the gradation of the brightness component of the image is a black crushing process.
  5.   The image processing apparatus according to claim 1, wherein the processing process is a process of increasing a saturation component of an image.
  6. The image processing apparatus according to claim 1, wherein the pixel value modulation means determines the modulation amount in accordance with the position that the pixel value takes in the color space, so that modulation that increases or decreases the value of the component is performed by the addition of the modulation amount.
  7.   The image processing apparatus according to claim 6, wherein the pixel value modulation unit determines whether the modulation amount is positive or negative according to a position of a modulation target pixel.
  8.   The image processing apparatus according to claim 1, wherein the pixel value modulation unit performs modulation by replacing the pixel value with a probability corresponding to the pixel value.
  9.   The image processing apparatus according to claim 1, wherein the pixel value modulation means preserves the sum of the pixel values of the pixels to be modulated before and after the modulation of the pixel values.
  10.   The image processing apparatus according to claim 1, wherein the pixel value modulation means modulates a pixel value in accordance with the distribution of pixel values in the image data.
  11.   The image processing apparatus according to any one of claims 1 to 7, wherein the pixel value modulation means performs different modulation depending on whether the image data constitutes an area gradation image or a density gradation image.
  12. An image processing method for executing image processing including processing for changing, based on a pixel value, the value of a component representing an image, the component being defined in a predetermined color space based on the pixel values of image data, when the component lies within a predetermined range in the color space,
    the method comprising a pixel value modulation step of modulating a pixel value by adding a modulation amount of the component to the component based on the pixel value in the image data so that the component lies outside the predetermined range, thereby reducing the number of pixels subjected to the processing.
  13. The image processing method according to claim 12, further comprising an image discrimination step of discriminating the type of the image data,
    wherein the pixel value modulation step modulates a pixel value when the image discrimination step determines that the image is other than a character/line image.
JP2007093551A 2007-03-30 2007-03-30 Image processing apparatus and image processing method Active JP4878572B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2007093551A JP4878572B2 (en) 2007-03-30 2007-03-30 Image processing apparatus and image processing method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2007093551A JP4878572B2 (en) 2007-03-30 2007-03-30 Image processing apparatus and image processing method
US12/056,407 US8482804B2 (en) 2007-03-30 2008-03-27 Image processing apparatus and image processing method

Publications (2)

Publication Number Publication Date
JP2008252699A JP2008252699A (en) 2008-10-16
JP4878572B2 true JP4878572B2 (en) 2012-02-15

Family

ID=39793800

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2007093551A Active JP4878572B2 (en) 2007-03-30 2007-03-30 Image processing apparatus and image processing method

Country Status (2)

Country Link
US (1) US8482804B2 (en)
JP (1) JP4878572B2 (en)

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8159716B2 (en) * 2007-08-31 2012-04-17 Brother Kogyo Kabushiki Kaisha Image processing device performing image correction by using a plurality of sample images
JP4433017B2 (en) * 2007-08-31 2010-03-17 ブラザー工業株式会社 Image processing apparatus and image processing program
JP4442664B2 (en) * 2007-08-31 2010-03-31 ブラザー工業株式会社 Image processing apparatus, image processing method, and image processing program
US8174731B2 (en) * 2007-08-31 2012-05-08 Brother Kogyo Kabushiki Kaisha Image processing device outputting image for selecting sample image for image correction
US8094343B2 (en) * 2007-08-31 2012-01-10 Brother Kogyo Kabushiki Kaisha Image processor
JP4793356B2 (en) * 2007-08-31 2011-10-12 ブラザー工業株式会社 Image processing apparatus and image processing program
JP4692564B2 (en) * 2008-03-14 2011-06-01 富士ゼロックス株式会社 Color processing apparatus and program
JP4623137B2 (en) * 2008-05-14 2011-02-02 富士ゼロックス株式会社 Color processing apparatus, method and program
JP5210145B2 (en) * 2008-12-22 2013-06-12 キヤノン株式会社 Image processing method and image processing apparatus
JP4715915B2 (en) 2008-12-25 2011-07-06 ブラザー工業株式会社 Color conversion table creation device, color conversion table creation method, and color conversion table creation program
JP5479219B2 (en) 2010-05-24 2014-04-23 キヤノン株式会社 Image processing apparatus and image processing method
US9623671B2 (en) 2010-05-24 2017-04-18 Canon Kabushiki Kaisha Image processor, printing apparatus, and image processing method
US9694598B2 (en) 2010-05-24 2017-07-04 Canon Kabushiki Kaisha Image processing apparatus, ink jet printing apparatus, and image processing method
JP5436388B2 (en) 2010-10-05 2014-03-05 キヤノン株式会社 Image processing apparatus, image processing method, and image recording apparatus
JP5541721B2 (en) 2010-10-05 2014-07-09 キヤノン株式会社 Image processing apparatus and image processing method
JP5465145B2 (en) 2010-10-05 2014-04-09 キヤノン株式会社 Image processing apparatus, image processing method, and image recording apparatus
JP5436389B2 (en) 2010-10-05 2014-03-05 キヤノン株式会社 Image processing apparatus and image processing method
JP6234098B2 (en) 2013-07-19 2017-11-22 キヤノン株式会社 Image processing apparatus, image processing method, and program
KR102019679B1 (en) * 2013-08-28 2019-09-10 삼성디스플레이 주식회사 Data processing apparatus, display apparatus including the same, and method for gamut mapping
JP2018098736A (en) 2016-12-16 2018-06-21 キヤノン株式会社 Image processing device, image processing method, and program

Family Cites Families (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10155087A (en) 1996-11-20 1998-06-09 Ricoh Co Ltd Image processor
US6738527B2 (en) * 1997-06-09 2004-05-18 Seiko Epson Corporation Image processing apparatus, an image processing method, a medium on which an image processing control program is recorded, an image evaluation device, and image evaluation method and a medium on which an image evaluation program is recorded
JP3886727B2 (en) * 1999-02-25 2007-02-28 富士通株式会社 Image processing device
JP3763720B2 (en) * 1999-05-07 2006-04-05 松下電器産業株式会社 Image processing apparatus and image processing method
JP2000333003A (en) * 1999-05-20 2000-11-30 Canon Inc Image forming device, method for controlling image forming device and computer-readable storage medium storing program
JP2001144943A (en) 1999-11-15 2001-05-25 Canon Inc Image processing method and image processing unit
US6704123B1 (en) * 1999-12-17 2004-03-09 Creo Inc. Method for applying tonal correction to a binary halftone image
TW514876B (en) * 2000-01-31 2002-12-21 Sony Corp Digital picture signal processing apparatus, method thereof, digital picture recording apparatus, method thereof, transmitting method thereof, and data record medium thereof
JP2001218021A (en) * 2000-02-04 2001-08-10 Canon Inc Picture processing method and picture processor
JP2001251513A (en) 2000-03-06 2001-09-14 Ricoh Co Ltd Image processing method, image processor and storage medium
JP4035278B2 (en) 2000-07-14 2008-01-16 キヤノン株式会社 Image processing method, apparatus, and recording medium
US7009734B2 (en) 2000-08-22 2006-03-07 Canon Kabushiki Kaisha Method and apparatus for forming color transform lookup table, and image processing method
JP2002262124A (en) 2000-11-30 2002-09-13 Canon Inc Image processor and method, and recording control method and device and printer driver
JP2002218271A (en) 2001-01-19 2002-08-02 Sharp Corp Image processor, image formation device and image, processing method
GB0120246D0 (en) * 2001-08-20 2001-10-10 Crabtree John C R Image processing method
JP4011933B2 (en) 2002-02-22 2007-11-21 キヤノン株式会社 Image processing apparatus and method
US20040165081A1 (en) * 2002-12-05 2004-08-26 Hiroyuki Shibaki Image processing apparatus, image processing system, and image processing method
US7463393B2 (en) * 2003-05-19 2008-12-09 Kabushiki Kaisha Toshiba Image processing apparatus and image processing method
TWI245557B (en) * 2003-09-11 2005-12-11 Matsushita Electric Ind Co Ltd Image compensation apparatus and method for the same
JP2005107872A (en) * 2003-09-30 2005-04-21 Fuji Photo Film Co Ltd Image processing apparatus and method, and program
JP4632438B2 (en) 2005-08-02 2011-02-23 キヤノン株式会社 Color processing method, and color processing apparatus and method for creating a lookup table
JP4594185B2 (en) 2005-08-02 2010-12-08 キヤノン株式会社 Color processing method and apparatus
JP4623300B2 (en) * 2005-12-17 2011-02-02 富士ゼロックス株式会社 Image processing apparatus and image processing program
JP4830652B2 (en) * 2006-06-12 2011-12-07 日産自動車株式会社 Image processing apparatus and image processing method
JP4890974B2 (en) 2006-06-29 2012-03-07 キヤノン株式会社 Image processing apparatus and image processing method
US7768671B2 (en) * 2006-09-18 2010-08-03 Xerox Corporation Color image gamut enhancement preserving spatial variation
US8154778B2 (en) * 2007-11-15 2012-04-10 Sharp Laboratories Of America, Inc Systems and methods for color correction processing and notification for digital image data generated from a document image

Also Published As

Publication number Publication date
US8482804B2 (en) 2013-07-09
JP2008252699A (en) 2008-10-16
US20080239410A1 (en) 2008-10-02

Similar Documents

Publication Publication Date Title
US7760934B2 (en) Color to grayscale conversion method and apparatus utilizing a high pass filtered chrominance component
US7209262B2 (en) Method and apparatus for processing image signal and computer-readable recording medium recorded with program for causing computer to process image signal
JP2993014B2 (en) Image quality control method for image processing device
US8477324B2 (en) Image processor and image processing method that uses s-shaped gamma curve
US7944588B2 (en) Image correction processing apparatus, image correction processing method, program, and storage medium
US6414690B1 (en) Gamut mapping using local area information
US6393148B1 (en) Contrast enhancement of an image using luminance and RGB statistical metrics
US7699423B2 (en) Image processing apparatus, image processing method, and image processing program
US7411707B2 (en) Image processing apparatus and method thereof
CA2285843C (en) Automated enhancement of print quality based on feature size, shape, orientation, and color
JP4308392B2 (en) Digital image processing method and mapping method
JP4067532B2 (en) Color conversion apparatus, image forming apparatus, color conversion method, computer program, and recording medium
JP5067276B2 (en) Color conversion method, color conversion table generated by the color conversion method, image processing apparatus, and color conversion program
JP4637063B2 (en) Image processing apparatus, image processing method, and program
US7940434B2 (en) Image processing apparatus, image forming apparatus, method of image processing, and a computer-readable storage medium storing an image processing program
US6175427B1 (en) System and method of tonal correction of independent regions on a compound document
US9349161B2 (en) Image processing apparatus and image processing method with edge enhancement
US7667711B2 (en) Image processing system, a method thereof, and a recording medium thereof
US20060203270A1 (en) Color converting device emphasizing a contrast of output color data corresponding to a black character
JP4771538B2 (en) Color conversion table generation method, color conversion table, and color conversion table generation apparatus
JP4331159B2 (en) Image processing apparatus, image forming apparatus, image processing method, image processing program, and recording medium therefor
US9036199B2 (en) Image processing apparatus for performing color matching processing, image processing method, and computer-readable medium
JP3962496B2 (en) Image processing method, apparatus, and recording medium
US7965426B2 (en) Image processing apparatus and method for performing gamut mapping via device-independent standard color space
JP4753638B2 (en) Document compression method, system for compressing document, and image processing apparatus

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20100330

RD02 Notification of acceptance of power of attorney

Free format text: JAPANESE INTERMEDIATE CODE: A7422

Effective date: 20101106

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20110512

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20110527

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20110726

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20111125

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20111128

R151 Written notification of patent or utility model registration

Ref document number: 4878572

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R151

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20141209

Year of fee payment: 3