JP4966787B2 - Color image forming apparatus and color image correction method - Google Patents


Info

Publication number
JP4966787B2
Authority
JP
Japan
Prior art keywords
interpolation
image
processing
image data
color
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
JP2007220351A
Other languages
Japanese (ja)
Other versions
JP2009055377A (en)
Inventor
陽子 井戸 (Yoko Ido)
Original Assignee
キヤノン株式会社 (Canon Inc.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by キヤノン株式会社 (Canon Inc.)
Priority to JP2007220351A
Publication of JP2009055377A
Application granted
Publication of JP4966787B2

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/46 Colour picture communication systems
    • H04N1/56 Processing of colour picture signals
    • H04N1/58 Edge or detail enhancement; Noise or error suppression, e.g. colour misregistration correction

Description

  The present invention relates to a color image forming apparatus and a color image correction method, and more particularly to a tandem type electrophotographic color image forming apparatus, in which the image forming portions for the respective color components are independent, and to a color image correction method for such an apparatus.

  Among color image forming apparatuses such as printers and copying machines, there are apparatuses provided with as many electrophotographic image forming units as there are color components, each image forming unit sequentially transferring a toner image of its color component onto a print medium. The image forming unit for each color includes a developing unit and a photosensitive drum. In a tandem color image forming apparatus, a plurality of factors are known to cause a positional shift (referred to as registration deviation) between the images of the color components.

  Such factors include non-uniformity of the deflection scanning unit, which includes an optical system such as a polygon mirror and an fθ lens, deviation of its internal mounting positions, and deviation of the mounting position of the deflection scanning unit with respect to the image forming apparatus main body. Because of these positional deviations, the scanning line is not a straight line parallel to the rotational axis of the photosensitive drum but becomes inclined or bent. If the degree of inclination or bending of the scanning line (hereinafter referred to as the profile, or the shape of the scanning line) differs for each color, registration deviation occurs.

  The profile has different characteristics for each image forming apparatus, that is, for each recording engine, and for the deflection scanning unit of each color. Examples of profiles are shown in FIGS. 6(a) to 6(d). In FIG. 6, the horizontal axis indicates the position in the main scanning direction in the image forming apparatus. The straight line 600 along the main scanning direction shows the ideal characteristic (profile) of a scanning line without bending. Curves 601, 602, 603, and 604 show example profiles of the scanning lines of cyan (hereinafter C), magenta (hereinafter M), yellow (hereinafter Y), and black (hereinafter K), respectively. The vertical axis indicates the amount of deviation in the sub-scanning direction from the ideal characteristic. As can be seen from the figure, the profile curve differs for each color. When an electrostatic latent image is formed on the photosensitive drum of the image forming unit for each color, this difference in profile between the colors appears as misregistration of the image data.

  As a method for dealing with registration deviation, Patent Document 1 describes a method in which, in the assembly process of the deflection scanning device, the amount of bending of the scanning line is measured using an optical sensor, the lens is mechanically rotated to adjust the bending of the scanning line, and the lens is then fixed with an adhesive.

  Patent Document 2 describes a method in which, in the step of assembling the deflection scanning device into the color image forming apparatus main body, the inclination of the scanning line is measured using an optical sensor, the deflection scanning device is mechanically tilted to adjust the inclination of the scanning line, and the device is then assembled into the main body of the color image forming apparatus.

  Patent Document 3 describes a method of measuring the inclination and the amount of bending of the scanning line using an optical sensor, correcting bitmap image data so as to cancel them, and forming an image from the corrected data. That is, the deviation of the actual scanning line from the ideal scanning line, a straight line on the surface of the photosensitive drum parallel to its rotational axis, is canceled by shifting the image data by the same amount in the opposite direction. Since this method corrects the image data, no mechanical adjustment member or adjustment step during assembly is required. Accordingly, the color image forming apparatus can be made smaller, and registration deviation can be dealt with at lower cost than with the methods described in Patent Documents 1 and 2. This electrical registration correction is divided into correction in units of one pixel and correction of less than one pixel. In the correction in units of one pixel, as shown in FIG. 15, pixels are shifted (offset) in the sub-scanning direction in units of one pixel in accordance with the correction amounts of inclination and bending. In the following description, each offset position is referred to as a transfer point, and the offset process is referred to as line transfer processing. In FIG. 15A, P1 to P5 correspond to transfer points.

  In FIG. 15, the scanning line profile 1501 is the correction target. The profile 1501 could be represented, for example, by a sequence of coordinate values of the pixels on the scanning line, but in FIG. 15 it is represented by approximate straight lines divided by region. A transfer point is a position in the main scanning direction at which, scanning the profile in the main scanning direction, a shift of one pixel arises in the sub-scanning direction; in FIG. 15 these are P1 to P5. With each transfer point as a boundary, the dots after the transfer point are shifted by one line in the direction opposite to the profile's shift in the sub-scanning direction, working line by line. FIG. 15B shows an example of image data shifted in the sub-scanning direction at each transfer point in this way. In the drawing, the hatched portion 1511 is one line before the line transfer process, that is, one line in the original image data. As a result of the line transfer process, each line is shifted in the direction that cancels the profile's shift in the sub-scanning direction. FIG. 15C shows an example of the image data obtained in this way; the hatched portion is one line before correction. At the time of image formation, the corrected image data is formed line by line: normal image formation proceeds in the order of line 1521, line 1522, and so on. As a result, after image formation, the hatched portion that constituted one line in the pre-correction image data lies on the ideal scanning line on which it should originally have been formed. However, since the line transfer process operates in units of one pixel, a shift of less than one pixel remains in the sub-scanning direction.
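As a concrete illustration of the line transfer process, the following minimal sketch shifts each column of a small bitmap by the opposite of the whole-pixel profile deviation at that column. The function name, the profile layout (one deviation per main-scan position), and the sample data are illustrative assumptions, not taken from the patent.

```python
def line_transfer(image, profile):
    """Shift each column of `image` (a list of rows) in the sub-scanning
    direction by the opposite of the profile deviation at that column,
    so the engine's scanning-line deviation is canceled."""
    height = len(image)
    width = len(image[0])
    out = [[0] * width for _ in range(height)]
    for x in range(width):
        shift = -profile[x]          # cancel the deviation
        for y in range(height):
            src = y - shift          # source row for output row y
            if 0 <= src < height:
                out[y][x] = image[src][x]
    return out

# A horizontal line in a 4-line image; the profile steps up one pixel at
# main-scan position 2, i.e. a single transfer point at x = 2.
image = [[0, 0, 0, 0],
         [1, 1, 1, 1],
         [0, 0, 0, 0],
         [0, 0, 0, 0]]
profile = [0, 0, 1, 1]               # deviation in whole pixels per column
corrected = line_transfer(image, profile)
```

After the transfer, the right half of the line moves up one row, producing the one-pixel step at the transfer point that the later interpolation processing smooths.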

  Therefore, the shift of less than one pixel that cannot be corrected by the line transfer process is corrected by adjusting the gradation values of the bitmap image data using the preceding and following pixels in the sub-scanning direction. That is, when the profile characteristic indicates an upward inclination in the scanning direction, the bitmap image data before gradation correction is rearranged into a pixel row inclined in the direction opposite to the inclination indicated by the profile (downward in this example). To bring this closer to the ideal corrected image data, gradation correction is performed in the vicinity of each transfer point to smooth the step there. This smoothing can be realized, for example, through the width and intensity of the laser pulse. The gradation correction for smoothing performed after the line transfer process is hereinafter referred to as interpolation processing.
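The apparatus realizes this smoothing through laser pulse width and intensity; the sketch below models only the underlying gradation arithmetic, blending each pixel with its sub-scanning neighbour according to the fractional (sub-pixel) deviation. The function name and the 0 to 15 tone range follow the embodiment's 4-bit pixels, but the exact weighting used by the apparatus is an assumption here.

```python
def interpolate_column(col, frac):
    """Blend each pixel with the next pixel in the sub-scanning direction,
    weighted by the fractional deviation `frac` (0.0-1.0) at this main-scan
    position, to smooth the one-pixel step left by line transfer."""
    out = []
    for y in range(len(col)):
        below = col[y + 1] if y + 1 < len(col) else 0
        out.append(round((1 - frac) * col[y] + frac * below))
    return out

# A hard edge (tone 15 then 0) smoothed with a half-pixel deviation:
# the edge pixel takes an intermediate tone instead of a one-pixel jump.
smoothed = interpolate_column([15, 15, 0, 0], 0.5)
```

With `frac = 0.5` the edge pixel becomes an intermediate tone (8 after rounding), which is the gradation-domain analogue of narrowing the laser pulse at that position.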

  However, depending on the nature of the image, some image data benefits from the interpolation process, while for other image data the interpolation process degrades image quality. For example, repeated graphic patterns (hereinafter, pattern images) and the characters and thin lines that can be drawn with office document creation software are smoothed by the interpolation process, which improves the visibility of the information. On the other hand, when interpolation processing in the vicinity of a transfer point is applied to a continuous tone image that has undergone screen processing, density unevenness occurs only in the vicinity of the transfer point and image quality deteriorates. This is because, for example, when a line growth screen is used, the interpolation processing changes the thickness of the screen lines at the transfer point, so that the density appears to change when viewed macroscopically. In addition, if interpolation processing is applied to an add-on image such as a background pattern including a background region and a latent image region, as described in Japanese Patent Application Laid-Open No. 2004-223854, its effect may be impaired, so interpolation processing is not suitable for it.

In this way, image degradation occurs when the application of interpolation processing is made uniform over the entire image without considering the characteristics of the image data. It is therefore necessary to determine whether to apply the interpolation process according to the attribute or feature of the target image data. Accordingly, an invention has been proposed in which feature detection is performed on an image before halftone processing or one-pixel-unit correction processing, and halftone processing or exception processing is applied according to the detection result (Patent Document 4 and others). Here, the exception processing includes interpolation processing for shifts of less than one pixel.
[Patent Document 1] JP 2002-116394 A
[Patent Document 2] JP 2003-241131 A
[Patent Document 3] JP 2004-170755 A
[Patent Document 4] JP 2006-297633 A

  However, the invention of Patent Document 4 is premised on processing image data before halftone processing or one-pixel-unit correction processing. In other words, feature detection is performed on the input continuous tone image data, and either halftone processing or exception processing is applied according to the result.

  On the other hand, quantization processing compresses the amount of image data and reduces the data processing load, and is therefore indispensable for coping with limited processing resources and for increasing processing speed. Thus, even when the input is continuous tone image data, it is desirable to first reduce the data amount by quantization so as to lighten the subsequent processing load.

  However, the above-described conventional technology makes no mention of a method of performing feature detection on image data after halftone processing (quantization processing) and applying the subsequent processing accordingly. Therefore, image degradation may occur for image data that has been binarized by facsimile reception printing or by an application.

  Conventionally, in order to correct the deviation of the scanning line, lines are smoothed by the interpolation process after the line transfer process. However, the image data after the line transfer process has lost continuity within each line, and the conventional technique cannot detect features from such image data itself.

  The present invention has been made in view of the above conventional examples, and an object thereof is to solve the above problems. More specifically, an object is to provide a color image forming apparatus and a color image correction method capable of applying appropriate interpolation processing even to image data whose attribute information has been lost, image data that has undergone halftone processing or pixel-by-pixel correction processing, and image data whose continuity has been lost through line transfer processing.

In order to achieve the above object, the present invention comprises the following arrangement. That is, a color image forming apparatus that includes an image forming unit for forming an image for each color component and that forms a color image by superimposing the images of the respective color components, comprising:
line transfer processing means for shifting the position of each pixel of the halftone image data to be processed in the sub-scanning direction, for each color component, so as to cancel the shift amount in the sub-scanning direction of the scanning line on the image carrier in the image forming unit;
interpolation prohibition region determination means for detecting a continuous tone image region of the halftone image data and determining the region of the continuous tone image to be an interpolation-prohibited region; and
interpolation processing means for performing, on the halftone image data excluding the interpolation-prohibited region, interpolation processing that smooths the pixel-unit shift caused by the shifting of the image data by the line transfer processing means.

  According to the present invention, appropriate interpolation processing can be performed even on image data whose attribute information has been lost, image data that has undergone halftone processing or pixel-by-pixel correction processing, and image data whose continuity has been lost through line transfer processing.

  Hereinafter, embodiments of the present invention will be described with reference to the drawings. In this embodiment, halftone image data is offset by shifting it by the same amount in the opposite direction as the deviation of the actual scanning line from the ideal scanning line that should originally be formed by scanning the surface of the photosensitive drum, that is, a scanning line parallel to the rotational axis of the drum. When smoothing the shift, if the type of image (also referred to as the image attribute) is the same for each color plate, the presence or absence of smoothing processing is unified across the color plates.

  That is, line transfer processing is performed that shifts the position of each pixel in the sub-scanning direction, for each color component of the halftone image data to be processed, so as to cancel the shift amount in the sub-scanning direction of the scanning line on the image carrier in the image forming unit. Thereafter, type determination is performed to determine the type of image for each color component of the image data to be processed; specifically, it is determined whether the image is a continuous tone image or a pattern image. A pattern image is an image including a repetitive pattern, as described in the background art. Then, based on the determined image type of each color component, it is determined whether or not to perform the interpolation processing that smooths the pixel-unit shift caused by the line transfer processing. The presence or absence of interpolation processing is determined for each type of image. However, if there is a first color component determined to be a pattern image or a continuous tone image, and the determination of whether to perform interpolation for it differs from the determinations for the other color components, the determination result for the first color component is changed. In this way, when the image type is common to the color components but the interpolation determination result differs between them, the determination results are changed so that the interpolation determination matches across all color components.
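The per-plane decision and its unification can be sketched as follows. This is a rough illustration only: the type labels, the mapping from type to interpolation on/off, and above all the direction of the unified decision when the planes disagree (disabling interpolation everywhere, taken here as the conservative choice) are assumptions, since the embodiment does not fix them at this point.

```python
def decide_interpolation(plane_types):
    """plane_types maps a color plane ('C', 'M', 'Y', 'K') to its detected
    type: 'pattern' (pattern image / characters) or 'contone' (continuous
    tone image). Returns per-plane True/False for interpolation processing."""
    # Initial per-plane decision: interpolate pattern images, not contone.
    decisions = {c: (t == 'pattern') for c, t in plane_types.items()}
    # If the planes disagree, unify the decision across all planes
    # (assumed here: switch interpolation off everywhere).
    if len(set(decisions.values())) > 1:
        decisions = {c: False for c in decisions}
    return decisions
```

When every plane is a pattern image the decision stays on for all of them; a single contone plane flips the unified decision off, so no plane is smoothed differently from the others.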

  A configuration example of a laser beam printer, and an image correction method executed by it, will be described below as an example of an image forming apparatus to which an embodiment of the present invention is applicable. The present embodiment is applicable not only to laser beam printers but also to other types of output devices such as inkjet printers and MFPs (Multi Function Printers / Multi Function Peripherals). A printer to which the present invention is meaningfully applied, however, is one that includes an image forming unit for each color component and may therefore suffer registration deviation between the images of the color components. In the case of an inkjet printer, registration deviation may occur in a serial printer in which the recording heads for the color components are mounted on independent carriages, or in a line head printer to which the recording heads for the color components can be attached independently. Applying the invention according to the present embodiment to these printers is therefore effective in improving image quality. However, it is the tandem type color laser printer in which the scanning line profile is most likely to differ for each color component, and the present embodiment will be described using it as an example.

<Tandem Color LBP Image Forming Unit>
FIG. 4 is a diagram illustrating the configuration of the blocks related to electrostatic latent image creation in the electrophotographic color image forming apparatus of the first embodiment. The color image forming apparatus includes a color image forming unit 401 and an image processing unit 402. The image processing unit 402 generates bitmap image information, and the color image forming unit 401 forms an image on a recording medium based on it. The image processing unit 402 also performs correction processing such as registration deviation correction with reference to the profile information 416C, 416M, 416Y, and 416K, measured in advance for the image forming unit of each color component and stored in the profile storage unit 403. In the following description, a reference numeral with a color symbol C, M, Y, or K attached denotes the component of that color, and the numeral without a symbol is used as a generic name. Here, the image forming unit refers to the unit, comprising the scanner unit 414 and the printing unit 415, that forms a monochrome image for each color component. The printing unit 415, which includes a photosensitive drum, a transfer drum, and the like, forms the toner image, and of course forms images other than characters as well.

  FIG. 2 is a cross-sectional view of the tandem color image forming unit 401, which employs an intermediate transfer member 28, as an example of an electrophotographic color image forming apparatus. The operation of the color image forming unit 401 will be described with reference to FIG. 2. The color image forming unit 401 drives exposure light according to the exposure time processed by the image processing unit 402, forms an electrostatic latent image on the photosensitive drum, that is, the image carrier, and develops the electrostatic latent image, thereby forming a single-color toner image of each color component. The single-color toner images are superimposed on the intermediate transfer member 28 to form a multicolor toner image, the multicolor toner image is transferred to the print medium 11, and the multicolor toner image is thermally fixed. The intermediate transfer member is also an image carrier. The charging means includes four injection chargers 23Y, 23M, 23C, and 23K for charging the photoreceptors 22Y, 22M, 22C, and 22K for the Y, M, C, and K colors, respectively; the injection chargers are provided with sleeves 23YS, 23MS, 23CS, and 23KS.

  The image carriers, that is, the photosensitive members (photosensitive drums) 22Y, 22M, 22C, and 22K are rotated counterclockwise by the drive motor in accordance with the image forming operation. Scanner units 414Y, 414M, 414C, and 414K as exposure means irradiate the photosensitive members 22Y, 22M, 22C, and 22K with exposure light, and selectively expose the surfaces of the photosensitive members 22Y, 22M, 22C, and 22K. As a result, an electrostatic latent image is formed on the surface of the photoreceptor. Developing units 26Y, 26M, 26C, and 26K, which are developing units, perform toner development for each color of Y, M, C, and K in order to visualize the electrostatic latent image. Each developing device is provided with sleeves 26YS, 26MS, 26CS, and 26KS. Each developing device 26 is detachable. The scanner unit can express the gradation of each pixel according to the width and intensity of the laser beam. For example, 16 gradations can be expressed.

  The primary transfer rollers 27Y, 27M, 27C, and 27K as transfer means press the intermediate transfer member 28 that rotates clockwise against the photosensitive members 22Y, 22M, 22C, and 22K, and transfer the toner image on the photosensitive member to the intermediate transfer member. Transfer to 28. By applying an appropriate bias voltage to the primary transfer roller 27 and making a difference between the rotation speed of the photoconductor 22 and the rotation speed of the intermediate transfer body 28, the monochromatic toner image is efficiently transferred onto the intermediate transfer body 28. This is called primary transfer.

  A multicolor toner image obtained by superimposing the single-color toner images of the stations (each color component's image forming unit is also called a station) is conveyed to the secondary transfer roller 29 as the intermediate transfer member 28 rotates. The multicolor toner image on the intermediate transfer member 28 is transferred onto the print medium 11, which is nipped and conveyed from the paper feed tray 21 to the secondary transfer roller 29. An appropriate bias voltage is applied to the secondary transfer roller 29 to electrostatically transfer the toner image. This is called secondary transfer. The secondary transfer roller 29 contacts the print medium 11 at position 29a while transferring the multicolor toner image onto it, and is separated to position 29b after the printing process.

  To melt the multicolor toner image transferred to the print medium 11 and fix it to the print medium 11, the fixing unit 31 includes a fixing roller 32 that heats the print medium 11 and a pressure roller 33 that presses the print medium 11 against the fixing roller 32. The fixing roller 32 and the pressure roller 33 are hollow, and heaters 34 and 35 are incorporated in them, respectively. The fixing unit 31 conveys the print medium 11 holding the multicolor toner image by means of the fixing roller 32 and the pressure roller 33, and applies heat and pressure to fix the toner on the print medium 11.

  The print medium 11 after toner fixing is then discharged to a discharge tray (not shown) by a discharge roller (not shown), and the image forming operation is completed. The cleaning unit 30 cleans the toner remaining on the intermediate transfer member 28. Waste toner remaining after transferring the four-color multicolor toner image formed on the intermediate transfer member 28 to the recording medium 11 is stored in a cleaner container. As described above, the tandem color LBP has an image forming unit including the printing unit 415 and the scanner unit 414 for each color component.

<Profile characteristics of scanning lines>
Next, the profile characteristics of the actual scanning line 302 for each color of the image forming apparatus will be described with reference to FIG. 3. In FIG. 3, the scanning line 302 is the actual scanning line, tilted and bent owing to the positional accuracy and diameter deviation of the photosensitive member 22 and the positional accuracy of the optical system in the scanner units 24 (24C, 24M, 24Y, 24K) shown in FIG. 2. The profile characteristics differ for each image forming apparatus, that is, for each recording device (recording engine), and in a color image forming apparatus they further differ for each color.

  FIG. 3A is a diagram showing part of the profile characteristics of the image forming apparatus, namely a region shifted upward in the sub-scanning direction. FIG. 3B shows a region shifted downward in the sub-scanning direction. The horizontal axis 301 is the ideal scanning line, showing the characteristic obtained when scanning is performed perpendicular to the rotation direction of the photosensitive member 22, that is, parallel to its rotational axis. In FIG. 3 the profile is shown as a graph, but the profile stored in the profile information 416 is discrete data. For example, starting from the scanning line start position P0, each time the actual scanning line moves one pixel away from or toward the ideal scanning line, the direction of movement (away or toward) is stored in association with that position. The position need only identify the pixel number in the scanning line direction. Accordingly, the profile 302 is approximated in the profile information by the line segments 311, 312, 313, and 314, which is sufficient for correcting registration deviation.

  Hereinafter, the profile characteristics will be described as expressing the correction that the image processing unit 402 should perform. However, since this representation is merely a convention, any representation may be adopted as long as the amount and direction of deviation can be uniquely specified. For example, the profile may instead be defined as the shift direction of the color image forming unit 401, with the image processing unit 402 configured to correct the reverse characteristics.

  FIG. 7 shows the correlation between the direction to be corrected by the image processing unit 402 based on the profile definition and the scanning line shift direction in the color image forming unit 401. When the profile characteristic of the color image forming unit 401 is as shown in FIG. 7A, the image processing unit 402 shifts the image data in the sub-scanning direction as shown in FIG. 7B. Conversely, when the profile characteristic of the color image forming unit 401 is as shown in FIG. 7C, the image processing unit 402 shifts the image data in the sub-scanning direction as shown in FIG. 7D. The deviation amounts are referenced to the ideal scanning line 301.

  For example, as shown in FIG. 9, the profile characteristic data (profile information) consists of the pixel position in the main scanning direction of each transfer point and the direction of change of the scanning line up to the next transfer point. Specifically, transfer points P1, P2, P3, ... Pm are defined for the profile characteristic of FIG. 9A. Each transfer point is defined as a point at which a shift of one pixel arises in the scanning line in the sub-scanning direction, and the direction of change up to the next transfer point may be upward or downward. For example, at the transfer point P2 the scanning line shifts by one line upward in the figure; that is, P2 is a transfer point for transferring from the current line to the line one line below. The shift direction at position P2 is upward (↑), as shown in FIG. 9B, although in the image processing the switch is made to the lower line. Similarly, at position P3 the shift direction is upward (↑). The shift direction in the sub-scanning direction at the transfer point P4 is downward (↓), unlike the preceding directions. As a method of holding this direction data, if, for example, "1" denotes upward and "0" denotes downward, the result is FIG. 9C. In this case, the number of data items to be held equals the number of transfer points: if there are m transfer points, only m bits need to be held.
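The compact representation described above, one main-scan position plus one direction bit per transfer point, can be sketched as follows. The function names, the bit packing into a single integer, and the sample positions are illustrative assumptions; only the "1 for upward, 0 for downward, m bits for m transfer points" encoding comes from the description.

```python
def encode_profile(transfer_points):
    """Encode a profile as (positions, direction_bits): one bit per transfer
    point, 1 for upward and 0 for downward, packed into an integer."""
    positions = [p for p, _ in transfer_points]
    bits = 0
    for i, (_, direction) in enumerate(transfer_points):
        if direction == 'up':
            bits |= 1 << i
    return positions, bits

def deviation_at(x, positions, bits):
    """Accumulated sub-scanning deviation (in whole pixels) at main-scan
    position x, obtained by summing the direction of every transfer point
    passed so far."""
    d = 0
    for i, p in enumerate(positions):
        if x >= p:
            d += 1 if (bits >> i) & 1 else -1
    return d

# Transfer points P1..P4 with directions up, up, up, down as in FIG. 9
# (the pixel positions themselves are made up for this sketch).
positions, bits = encode_profile([(100, 'up'), (260, 'up'), (500, 'up'), (720, 'down')])
```

Four transfer points thus cost four position words and four direction bits, regardless of the image width.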

<Transfer points>
Next, with reference to FIG. 3A, a transfer point in a region shifted upward in the laser scanning direction will be described. The transfer point in the present embodiment indicates a point that is shifted by one pixel in the sub-scanning direction. That is, in FIG. 3A, P1, P2, and P3, which are points shifted by one pixel in the sub-scanning direction on the upward curve characteristic 302, correspond to transfer points. In FIG. 3A, P0 is used as a reference. As can be seen from the figure, the distances (L1, L2) between the transfer points are shorter in the region where the curve characteristic 302 is changing rapidly, and longer in the region where the curve characteristic is changing gradually.

  Next, with reference to FIG. 3B, a transfer point in a region shifted downward in the laser scanning direction will be described. Even in a region showing a downward-shifting characteristic, a transfer point is defined as a point shifted by one pixel in the sub-scanning direction. That is, in FIG. 3B, Pn and Pn+1, the points shifted by one pixel in the sub-scanning direction on the downward curve characteristic 302, correspond to transfer points. In FIG. 3B as well, as in FIG. 3A, the distances (Ln, Ln+1) between transfer points are short in regions where the curve characteristic 302 changes rapidly and long in regions where it changes gently.

  As described above, the transfer points are closely related to the degree of change of the bending characteristic 302 of the image forming apparatus. Therefore, the number of transfer points increases in an image forming apparatus with a sharp curve characteristic and, conversely, decreases in an image forming apparatus with a gentle curve characteristic.

  If the bending characteristics of the image forming unit are different for each color, the number and position of transfer points are also different. This difference in scanning line profile between colors appears as a registration error in an image in which all color toner images are transferred onto the intermediate transfer member 28. The present invention relates to processing at this transfer point.

<Tandem Color LBP Image Processing Unit>
Next, the image processing unit 402 in the color image forming apparatus will be described with reference to FIG. 4. The image generation unit 404 generates printable raster image data from print data received from a computer device (not shown), and outputs, for each pixel, RGB data and attribute data indicating the data attribute of the pixel. Instead of handling image data received from a computer device or the like, the image generation unit 404 may be provided with reading means configured inside the color image forming apparatus and handle image data from the reading means. The color conversion unit 405 converts the RGB data into CMYK data matching the toner colors of the color image forming unit 401, and stores the CMYK data and attribute data in the storage unit 406. The storage unit 406 is a first storage unit configured in the image processing unit 402 and temporarily stores the dot image data to be subjected to print processing. The storage unit 406 may be configured as a page memory that stores dot image data for one page, or as a band memory that stores data for a plurality of lines. Dot image data is also called raster image data.

  The halftone processing units 407C, 407M, 407Y, and 407K perform halftone processing on the attribute data and each color data output from the storage unit 406. Specific halftone methods include screen processing (that is, dither processing) and error diffusion processing. Screen processing converts the input image data to N values using a predetermined set of dither matrices. Error diffusion processing converts the input image data to N values by comparing it with a predetermined threshold value, and diffuses the difference between the input image data and the threshold value to the surrounding pixels that will be N-valued later. In this embodiment, screen processing is performed. Here N is 2, but the number of bits per pixel is 4; that is, the quantization process converts each pixel value to 0 or 15.

  The second storage unit 408 is configured inside the image forming apparatus and stores the N-valued (halftone) image data processed by the halftone processing units 407 (407C, 407M, 407Y, 407K). When a pixel position to be processed in the blocks downstream of the second storage unit 408 is a transfer point, a one-line transfer is performed at the time of reading from the second storage unit 408. Specifically, the read address is not simply advanced to the next dot; it is additionally advanced by one line from the next dot, or returned by one line. Whether to advance or return by one line is determined by the shift direction.
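This read-time line transfer can be sketched as follows. This is a minimal illustration only, assuming a plain 2D list as the page buffer and a list of (x position, shift direction) pairs for the transfer points; these names and data layouts are not part of the embodiment itself.

```python
def read_line_with_transfer(image, y, transfer_points):
    """Read line y from the page buffer, shifting the read address by one
    line after each transfer point instead of moving the dots themselves.

    image: 2D list of pixel values, image[row][column]
    transfer_points: list of (x_position, direction) pairs, where
    direction is +1 (advance one line) or -1 (return one line).
    """
    height, width = len(image), len(image[0])
    shifts = dict(transfer_points)      # x position -> +1 or -1
    out, offset = [], 0
    for x in range(width):
        if x in shifts:                 # crossing a transfer point:
            offset += shifts[x]         # advance or return by one line
        src = min(max(y + offset, 0), height - 1)   # clamp at page edges
        out.append(image[src][x])
    return out
```

Reading line 1 of a three-line buffer with a single transfer point thus yields pixels from line 1 before the point and from the adjacent line after it, which is exactly the one-line step the interpolation processing later smooths.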

  FIG. 8A schematically shows the state of the data held in the storage unit 408. As shown in FIG. 8A, the data stored in the storage unit 408 is the output of the halftone processing unit 407, unaffected by the correction direction of the image processing unit 402 or by the curve characteristic of the scanning line of the image forming unit 401. If the profile characteristic is such that the correction direction of the image processing unit 402 is downward, then when the line 701 in FIG. 8 is read, the data is shifted upward by one pixel with the transfer point as a boundary. If the profile characteristic is such that the correction direction of the image processing unit 402 is upward, then when the image data of the line 701 is read from the storage unit 408, it is shifted downward by one pixel with the transfer point as a boundary.

  The interpolation determination units 409C, 409M, 409Y, and 409K for the respective colors determine, for the pixels before and after each transfer point of the input N-valued data, whether each pixel requires interpolation in the subsequent processing or may be left uninterpolated. The timing adjustment units 410C, 410M, 410Y, and 410K synchronize the N-valued data read from the storage unit 408 with the determination results of the interpolation determination units 409. The transfer buffers 411C, 411M, 411Y, and 411K temporarily hold the output data of the interpolation determination units 409 and the timing adjustment units 410. In this description, the first storage unit 406, the second storage unit 408, and the transfer buffers 411 are described as separate components; however, a common storage unit may instead be configured inside the image forming apparatus.

  The interpolation processing units 412C, 412M, 412Y, and 412K perform interpolation processing on the data received from the transfer buffers 411, based on the determination results from the interpolation determination units 409, which are likewise transferred through the transfer buffers. Although the interpolation determination unit 409 makes a determination for each pixel, the interpolation processing in the interpolation processing unit 412 uses the pixels before and after each transfer point according to the profile (curve characteristic) of the image forming apparatus. FIGS. 5A and 5B (collectively referred to as FIG. 5) show the interpolation method at transfer points.

<Interpolation process>
FIG. 5A illustrates the curve characteristic of the scanning line of the image forming apparatus with respect to the laser scanning direction. Region 1 is a region where the image processing unit 402 must correct downward, and region 2 is a region where it must correct upward. In the following description of the interpolation processing, the minimum interval between transfer points is assumed to be 16 pixels for convenience of explanation, but the present invention is not limited to this: the interval may be an arbitrary number of pixels, or a power of 2 to simplify the circuit configuration. Interpolation, that is, the smoothing described later, is performed on the 16 pixels immediately before the transfer point in the main scanning direction. If the interval between transfer points is longer than 16 pixels, the portion before the smoothed region (the left side in the figure) is left unsmoothed. The number 16 was chosen because, in this example, each binarized pixel is 4 bits and the gradation expression capability of the image forming unit allows 16 gradations per pixel; a step between lines can therefore be smoothed by changing the density one gradation at a time.

  FIG. 5B shows the image before and after the transfer point Pc in the example of FIG. 5A prior to the transfer process, that is, the halftone image data output by the halftone processing unit 407. The line of interest is the center line of the three lines of image data shown in the figure. FIG. 5C shows the data 503 after the one-pixel transfer process applied to the line of interest, that is, the image data as output from the storage unit 408. Since the line transfer process is performed at the time of reading from the storage unit 408, the pixel configuration before and after the transfer point Pc, as input to the interpolation processing unit 412, shows a step of one line with the transfer point Pc as a boundary.

  The interpolation processing unit 412 performs interpolation processing on the image data that appears as a step on the line of interest. Since the correction direction in region 1 is upward, the interpolation for the line of interest is performed by a weighted calculation with the image data of the following line. As shown in FIG. 5D, the weighting in this description is chosen so that the sum of the weights for the two pixels aligned in the sub-scanning direction is 16, matching the minimum transfer point interval. Of course, this is only an example, and the sum of the weights is not limited to 16; it may be a power of 2 to reduce the arithmetic circuitry, or an arbitrary coefficient to improve accuracy. Further, as described below, the weighting coefficient may be changed in units of one pixel, or a common weighting coefficient may be used for units of a plurality of pixels as shown in FIG. 5D. Furthermore, the number of pixels covered may be varied according to the values of the weighting coefficients. By definition, a transfer point corresponds to a position on the main scanning line where the image shifts by one pixel in the sub-scanning direction; the reference position for interpolation is therefore described as the main scanning start point, that is, the left end. The arithmetic expression used for the interpolation is shown in (Expression 1), where x is the position of the target pixel in the main scanning direction, y is its position in the sub-scanning direction, p is the pixel value, and p′ is the corrected pixel value.

p′(x, y) = w1 × p(x, y−1) + w2 × p(x, y) + w3 × p(x, y+1)   (Expression 1)
Here, w1, w2, and w3 are weighting coefficients sharing a common x-coordinate, defined in this example by a 3 × 16 coefficient matrix as shown in FIG. 5D. The coefficient matrix in FIG. 5D applies when the line is shifted by one line at the transfer point. For the line immediately above the line of interest, the coefficients are all 0. For the line of interest (the center line in the figure), the coefficient decreases by 1/16 per pixel moving rightward, from 15/16 down to 0/16 (in FIG. 5D the denominator is omitted). For the line immediately below the line of interest, the coefficient increases by 1/16 per pixel moving rightward, from 1/16 up to 16/16. This coefficient matrix is associated with the 3 × 16 pixels centered on the line of interest immediately before the transfer point, and corrected pixel values are obtained according to Expression 1. Each pixel value before correction is replaced with the corresponding corrected value. This is done for every line of the image data to be processed. In effect, Expression 1 computes a weighted average of the pixel value of interest and the corresponding pixel values on the lines above and below it.
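As an illustration, Expression 1 with the FIG. 5D-style coefficients might be sketched as follows. The function name, the flat-list image representation, and the simplified boundary handling are assumptions for illustration only; with these coefficients one of the three weights of Expression 1 is always 0, so only two terms remain.

```python
def interpolate_before_transfer(image, y, pc, upward=True):
    """Apply Expression 1 over the 16 pixels just before transfer point pc
    on line y.  The weight moves linearly from the line of interest
    (15/16 .. 0/16) to the adjacent line (1/16 .. 16/16), as in FIG. 5D.
    'upward' selects whether the adjacent line is the following (y+1) or
    the preceding (y-1) line.  A sketch only; edges are not range-checked.
    """
    out = list(image[y])
    other = y + 1 if upward else y - 1
    for i in range(16):                  # 16-pixel window left of pc
        x = pc - 16 + i
        w_self = (15 - i) / 16.0         # weight on the line of interest
        w_adj = (i + 1) / 16.0           # weight on the adjacent line
        out[x] = w_self * image[y][x] + w_adj * image[other][x]
    return out
```

For a solid line of value 16 over blank neighbours, the corrected line ramps from 15 down to 0 across the window, which is the stepwise density change described above.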

  FIG. 5E shows a conceptual diagram of the interpolated pixel values obtained by applying Expression 1 to the image data of FIG. 5B in this example. Under the interpolation of Expression 1, before the transfer point Pc, pixels closer to Pc are influenced more by the pixel values of the following line, while pixels farther from Pc (to the left) are influenced more by the line of interest, that is, the line of black data.

  As for the pixels behind the transfer point Pc, pixels closer to Pc are influenced by the image data of the line preceding the line of interest, while pixels farther from Pc are influenced by the line following the line of interest. Here, the line preceding the line of interest is the original line of interest, which has become the preceding line's data due to the one-pixel step of the transfer process. In this example, pixels other than the 16 pixels before the transfer point are not subjected to the interpolation process, so their image data is not smoothed.

  Next, the region where correction must be performed downward will be described. When correcting downward, the weighting coefficients used for calculating the corrected pixel values are assigned to the line of interest and the line preceding it.

  FIG. 5F shows the image data at the time of output from the halftone processing unit 407, and FIG. 5G shows an example of the image data as read from the storage unit 408. Since downward correction is performed at the transfer point Pa, a one-pixel step appears with the transfer point Pa as a boundary, as shown in FIG. 5G. The values of w1, w2, and w3 for downward correction are as shown in FIG. 5H; for convenience of explanation, the sum of the weighting coefficients is again 16, as in the upward correction. Applying Expression 1 in the downward case likewise yields corrected pixel values with the transfer point Pa as a boundary. That is, before the transfer point Pa, pixels closer to the transfer point are influenced more by the pixel values of the preceding line, and pixels farther from it are influenced more by the line of interest. Behind the transfer point Pa, pixels closer to the transfer point are influenced by the line of interest, and pixels farther from it are influenced by the line preceding the line of interest (FIG. 5(i)). In this example, however, the interpolation processing is applied only to the 16 pixels on the near side of each transfer point; in FIG. 5(i) the image appears smoothed on both sides of the transfer point Pa only because the interval between the transfer points Pa and Pc is exactly 16 pixels. With a longer interval, the far side would not be smoothed.

  As described above, regardless of whether the correction direction is upward or downward, the interpolation processing of the interpolation processing unit 412 prevents a large discontinuity from appearing in pixel data that is continuous in the main scanning direction due to the one-pixel step of the transfer process.

  The pulse width modulation (PWM) units 413 convert the image data of each color output from the interpolation processing units 412 into exposure times for the scanner units 414C, 414M, 414Y, and 414K. The converted image data is output by the printing unit 415 of the image forming unit 401. The profile characteristic data already described with reference to FIG. 9 is held in the image forming unit 401 as characteristics of the image forming apparatus (profiles 416C, 416M, 416Y, 416K). The image processing unit 402 performs the line transfer processing and the interpolation processing according to the profile characteristics held by the image forming unit 401.

<Interpolation determination process>
Next, the most characteristic part of the present embodiment will be described further with reference to additional drawings. The feature of this embodiment, determination of the interpolation prohibited area on image data after halftone processing or scanning line transfer processing, is performed, for example, by the procedure shown in the flowcharts of FIGS. These flowcharts are executed by the configuration shown in FIG.

  FIG. 10 is a flowchart for explaining processing in the halftone processing unit 407, the interpolation determination unit 409, and the interpolation processing unit 412. FIG. 11 is a detailed processing block diagram of the interpolation determination unit 409. Details of halftone processing, interpolation determination processing, and interpolation processing will be described using these drawings.

  In step S1001, the halftone processing units 407C, 407M, 407Y, and 407K perform halftone processing on M-bit continuous tone image data (also called contone image data) 10A for each color component. The halftone processing is screen processing or error diffusion processing, and outputs halftone image data 10B quantized to N bits. Here, M and N are natural numbers with M > N. In this example, screen processing using a dither matrix is described. The process then proceeds to step S1002. In this description, identical components provided for each color component may be referred to collectively, without the color component suffix.

  In step S1002, line transfer processing is performed by the timing adjustment unit 410 controlling the read timing when image data is read from the storage unit 408. This is a process of performing coordinate position conversion in units of one pixel at each transfer point. The process then proceeds to step S1003.

  In step S1003, the interpolation determination units 409C, 409M, 409Y, and 409K determine the interpolation prohibited area (interpolation prohibition determination). Details of this processing will be described later with reference to FIG. 11. When the determination is made, an interpolation determination result 10D is output, describing for each pixel whether interpolation is to be performed (hereinafter, "interpolation determination flag ON") or not (hereinafter, "interpolation determination flag OFF"). The process then proceeds to step S1004. The interpolation prohibited area is the area where interpolation processing is not performed at transfer points, that is, the area where the interpolation determination flag is OFF.

  In step S1004, the following processing is performed with reference to the interpolation determination result 10D: if the determination result for the pixel of interest is "interpolation determination flag ON", the process proceeds to step S1005, where the interpolation processing unit 412 performs the interpolation, and then moves to the next pixel; if the flag is OFF, the interpolation is skipped and the process moves directly to the next pixel. When the interpolation processing for all pixels is completed, the interpolated image data 10E is output and the processing ends. The determination in step S1004 is executed by the interpolation processing unit 412 on receiving the interpolation determination result 10D.

<Determination of interpolation prohibited areas>
Next, the processing of the interpolation determination units 409C, 409M, 409Y, and 409K in step S1003 will be described in detail with reference to FIG. 11. The interpolation determination unit 409 (described collectively for the color components) receives as input the halftone image data 10C that has undergone halftone processing and line transfer. The interpolation determination unit 409 performs interpolation determination in three parts: a continuous tone image processing unit 1101, a pattern image processing unit 1102, and an isolated point image processing unit 1103.

  First, the continuous tone image processing unit 1101 will be described. The binarization unit 1104 binarizes the input image 10C. Possible methods include binarization against a preset threshold value, or binarization using the average value of the surrounding pixels as the threshold. The binarized image is passed to the dither pattern detection unit 1105.

  Next, the dither pattern detection unit 1105 determines whether the image matches a previously registered dither pattern (dither matrix pattern); that is, it detects the periodicity of the dither pattern in the image data. Dither pattern detection for each color plate is performed using run-length matching, template matching, or the like. When run-length matching is used, for example, the image may be determined to be a continuous tone image if a pixel pattern of a predetermined arrangement of value-1 and value-0 pixels (a "run") is detected a plurality of times in succession. In template matching, a screen pattern registered in advance for each color component is pattern-matched against the image data of the corresponding color component to obtain a determination result. The screen pattern is determined by the dither matrix used for screen processing; since screen processing is normally performed with a predefined dither matrix, the screen pattern can be known in advance. The screen pattern can be stored, for example, as a screen angle, and a contour with a constant screen angle can be extracted by filtering. As a result of pattern matching, if the area of the extracted continuous tone image object exceeds a certain ratio of the entire image, for example 50%, the image is determined to be a continuous tone image. Such processing for determining the type of image (continuous tone image, pattern image, or isolated point image) is also referred to as image attribute determination processing. When the dither pattern of a registered color plate matches, the continuous tone image determination flag is set to ON in association with the position and range of the compared area.
Alternatively, it is determined whether a transfer point is included in the image area under determination. If so, the determination result is returned with the continuous tone image determination flag set to ON in association with the line identifier (for example, the line number of the image data) containing the area under determination and the position of the transfer point. If the continuous tone image determination flag is associated with each pixel, a bitmap of the flag can be formed corresponding to each pixel of the image data. In this example, a bitmap is assumed to be generated in this way.
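The template-matching branch above (area ratio against a registered screen pattern) could be sketched roughly as follows, assuming binarized 0/1 image data. The exact-match sliding window and the function names are illustrative assumptions, not the embodiment's implementation.

```python
def is_continuous_tone(image, template, ratio=0.5):
    """Slide the registered screen pattern (template) over the binarized
    image, mark the pixels covered by exact matches, and declare a
    continuous tone image when the covered area exceeds the given ratio
    of the whole image (50% in the example above)."""
    h, w = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    covered = [[False] * w for _ in range(h)]
    for y in range(h - th + 1):
        for x in range(w - tw + 1):
            if all(image[y + dy][x + dx] == template[dy][dx]
                   for dy in range(th) for dx in range(tw)):
                for dy in range(th):            # mark the matched block
                    for dx in range(tw):
                        covered[y + dy][x + dx] = True
    area = sum(row.count(True) for row in covered)
    return area >= ratio * h * w
```

An image tiled with the registered pattern is fully covered and judged continuous tone; a blank image is not.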

  On the other hand, when run-length matching is used, a run that reaches a transfer point may be interrupted by the line transfer even though it is genuinely a run. Therefore, "transfer point processing" is performed as exception handling. An example is described below with reference to FIG. 13. Consider matching on a line of interest as in the pattern 1310 of FIG. 13, where the run pattern is "11000" (value-0 pixels are shown hatched). Suppose a run is detected in the area 1301 before the transfer point, so that the detection determination flag indicating run detection is ON. In this case, for the run 1302 that crosses the transfer point, a run is considered to have been detected regardless of the run pattern, and the continuous tone image determination flag is set to ON. The continuous tone image determination flag is stored in association with the position and range of the determination target area (run). Alternatively, suppose that the continuous tone image determination flags of the areas 1303 and 1305 before and after the run 1304 crossing the transfer point are ON, as in the pattern 1320 of FIG. 13. In this case, the continuous tone image determination flag of the run 1304 crossing the transfer point is also set to ON. Such processing can be expected to improve determination accuracy at transfer points. Since the transfer points are determined by the profile characteristics measured in advance for each color image forming unit, whether the run under determination crosses a transfer point can be determined from those profile characteristics. The continuous tone image determination flag is stored in association with the position and range of the determination target area; in this example, bitmap data associated with each pixel is generated.
When the determination of the entire image is completed, the determination result is sent to the decoder 1110. The determination is, of course, performed over the entire image data output at one time; for example, in the case of a page printer, it is performed on the image data of each color component constituting one page. The same applies to the pattern image processing and the isolated point image determination processing.
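The transfer point exception processing can be illustrated as follows, assuming the line has already been segmented into runs with one detection flag per run (the segmentation itself, and the parameter names, are outside the embodiment and purely illustrative).

```python
def apply_transfer_point_exception(run_flags, crossing, require_both=False):
    """Transfer point exception for run-length matching, in the style of
    patterns 1310/1320: a run that straddles the transfer point may be
    broken by the line transfer, so treat it as detected when the run
    just before it matched (pattern 1310), or -- with require_both=True --
    when the runs on both sides matched (pattern 1320).

    run_flags: per-run detection flags along the line of interest
    crossing:  index of the run straddling the transfer point
    """
    flags = list(run_flags)
    before_ok = crossing > 0 and flags[crossing - 1]
    after_ok = crossing + 1 < len(flags) and flags[crossing + 1]
    if (before_ok and after_ok) if require_both else before_ok:
        flags[crossing] = True
    return flags
```

The two modes correspond to the two alternatives in the text: trusting the preceding run alone, or requiring agreement on both sides of the transfer point.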

  Next, the pattern image processing unit 1102 will be described. First, binarization is performed by the binarization unit 1106; the method is the same as in the continuous tone image processing unit 1101 and is therefore omitted. Next, the continuous pattern detection unit 1107 determines whether the image matches a previously registered pattern image template, using run-length matching or template matching. When a registered template matches, the pattern image determination flag associated with the determination target area is set to ON and the determination result is returned.

  FIG. 17 shows an example using run-length matching. When a run 1701 matching the registered pattern is detected on the line of interest, it is checked whether the same run exists on the lines above and below at a constant phase relative to the line of interest. FIG. 17 shows an example in which a run 1703 is detected on the lower line with its phase advanced relative to the run 1701 detected on the line of interest, and a run 1702 is detected on the upper line with its phase delayed by two pixels. In this case, the pattern image determination flag is set to ON for the pixels corresponding to the run 1701, that is, in association with the area of the run 1701, and the result is returned. The pattern image determination flag is also generated as bitmap data associated with each pixel of the image data.
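The phase-tolerant run check of FIG. 17 might look roughly like this, assuming binarized lines represented as 0/1 lists. The phase bound and the exact-match helper are illustrative assumptions only.

```python
def find_runs(line, run):
    """Start positions where the exact 0/1 run pattern occurs in a line."""
    n = len(run)
    return [i for i in range(len(line) - n + 1) if line[i:i + n] == run]

def detect_pattern_runs(lines, y, run, max_phase=2):
    """A run found on the line of interest (index y) counts as part of a
    repeating pattern only if the same run also occurs on the lines above
    and below within a bounded phase shift, as in FIG. 17."""
    hits = []
    above = find_runs(lines[y - 1], run)
    below = find_runs(lines[y + 1], run)
    for pos in find_runs(lines[y], run):
        if (any(abs(p - pos) <= max_phase for p in above)
                and any(abs(p - pos) <= max_phase for p in below)):
            hits.append(pos)
    return hits
```

With the run "11000" present on the upper line two pixels early and on the lower line one pixel late, the run on the line of interest is accepted.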

  In the pattern image processing, as in the continuous tone image processing unit 1101, runs may be interrupted at transfer points, so discontinuity of the determination result is avoided by performing the transfer point processing. That is, when the area immediately before an area containing a transfer point is determined to be a pattern image, the area containing the transfer point is regarded as having a run of the predetermined pattern, and the pattern image flag is set to ON in association with that area. Alternatively, when the areas before and after an area containing a transfer point are both determined to be pattern images, the area containing the transfer point is regarded as having a run of the predetermined pattern, and the pattern image flag is set to ON in association with that area. When the determination of the entire image is completed, the determination result is sent to the decoder 1110.

  Next, the isolated point image processing unit 1103 will be described. First, binarization is performed by the binarization unit 1108, using a method such as simple binarization against a set threshold value. Next, the isolated point detection unit 1109 determines whether the image matches a pre-registered isolated point pattern, for example by template matching. Several kinds of isolated point patterns, shown as templates in patterns 1601 to 1608 of FIG. 16, are registered in advance. When a registered template matches, the isolated point determination flag is set to ON for the isolated point and the pixels on the lines above and below it (that is, the area corresponding to each template in FIG. 16), and the determination result is returned. When the determination is completed for the entire image, the determination result is sent to the decoder 1110.

  The decoder 1110 receives the above three attribute determination results and outputs the final interpolation determination result 10D. One possible means is a decoder LUT, an example of which is shown in FIG. 14: the LUT stores in advance whether the output interpolation determination result should be ON or OFF for each combination of the ON/OFF states of the three attribute determination flags. Then, for each pixel of the input image, the interpolation determination result is output pixel by pixel from the three attribute determination results using the decoder LUT, and the process ends when all pixels have been determined. For example, since interpolation processing tends to degrade the image quality of a continuous tone image, the interpolation determination result is OFF when only the continuous tone image determination flag is ON. Conversely, since interpolation improves the image quality of a pattern image, the interpolation determination result is ON when only the pattern image determination flag is ON. Since interpolating isolated points is not meaningful, the interpolation determination result is OFF when only the isolated point determination flag is ON. FIG. 14 shows an example of such an interpolation determination.
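A minimal sketch of such a decoder LUT follows. Only the three single-flag rows are taken from the description above; the default of ON for the remaining combinations is an assumption for illustration, since the actual table contents depend on the priorities chosen.

```python
def decode_interpolation(cont_tone, pattern, isolated, lut=None):
    """Map the combination of the three attribute flags to the final
    interpolation ON/OFF decision, in the style of the FIG. 14 LUT."""
    if lut is None:
        lut = {
            (True, False, False): False,  # continuous tone only: degraded by interpolation
            (False, True, False): True,   # pattern image only: improved by interpolation
            (False, False, True): False,  # isolated point only: interpolation meaningless
        }
    return lut.get((cont_tone, pattern, isolated), True)
```

Passing a different `lut` corresponds to switching decoder LUTs per job, as described in the modification below.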

  As a result, a bitmap of the interpolation determination flag is generated. The area where the interpolation determination flag is OFF is the interpolation prohibited area.

  Note that the values set in the decoder LUT (the table in FIG. 14) vary depending on, for example, whether the continuous tone image or the pattern image is to be prioritized. It is therefore possible to switch among a plurality of decoder LUTs for each job.

  By such a method, even when the target is an image after halftone processing or line transfer processing, places that should be interpolated can be distinguished from places that should not be, and appropriate interpolation processing can be performed.

<Determination processing of error diffused image data>
Next, the case where error diffusion processing is used as the halftone processing will be described. When error diffusion processing is performed, continuous tone images, pattern images, and isolated points cannot be determined as in FIG. 11. If no interpolation is performed at all, however, the jaggedness of edge portions, such as those of patch images, becomes conspicuous. Therefore, determination processing is performed that detects the edge portions of the image and interpolates them.

  FIG. 12 is a block diagram explaining the details of the interpolation prohibited area determination processing when error diffusion is selected as the halftone processing. When the halftone processing is error diffusion, the error diffusion image determination processing unit 1200 is applied instead of the continuous tone image processing unit 1101 of FIG. 11. The binarization unit 1201 binarizes the image data 10C after transfer and halftone processing; since the method is the same as that of the binarization unit 1108 of the isolated point image processing unit 1103, its description is omitted. The edge detection unit 1202 then performs edge detection. This example exploits the fact that, when three consecutive lines are taken as the lines of interest and an edge of the image falls on the center line, the pixel of interest is always non-zero and one of the pixels above or below it is zero. That is, taking the center pixel of the three vertically adjacent pixels in the three consecutive lines as the pixel of interest, it is determined whether the pixel above or below it is zero, and the determination is repeated while moving the region of interest. When a certain number of consecutive positions continue in which either the upper or the lower neighbor of the pixel of interest is zero, the corresponding portion is regarded as an edge, and the error diffusion edge determination flag is set to ON and stored in association with the pixel of interest. Of course, within such a continuous region the edge directions must match. When all pixels have been examined, the process proceeds to the decoder 1203.
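The edge detection described above could be sketched as follows. The run-length threshold and the reading that exactly one vertical neighbour must be zero are assumptions; the text only says "a certain number" of consecutive positions with matching edge directions.

```python
def detect_error_diffusion_edges(image, min_run=4):
    """Flag horizontal runs of edge candidates in error-diffused data.
    A pixel is a candidate when it is non-zero and exactly one of its
    vertical neighbours is zero; a run of at least min_run candidates
    with the same edge direction is marked as an edge."""
    height, width = len(image), len(image[0])
    flags = [[False] * width for _ in range(height)]
    for y in range(1, height - 1):
        run_start, run_dir = 0, 0
        for x in range(width + 1):          # x == width acts as a flush step
            d = 0
            if x < width and image[y][x] != 0:
                if image[y - 1][x] == 0 and image[y + 1][x] != 0:
                    d = -1                  # edge faces upward
                elif image[y + 1][x] == 0 and image[y - 1][x] != 0:
                    d = 1                   # edge faces downward
            if d == run_dir and d != 0:
                continue                    # current run keeps going
            if run_dir != 0 and x - run_start >= min_run:
                for xr in range(run_start, x):
                    flags[y][xr] = True     # long enough: mark as edge
            run_start, run_dir = x, d
    return flags
```

For a solid horizontal bar two lines thick, the top line is flagged as an upward-facing edge and the bottom line as a downward-facing edge, while the blank lines remain unflagged.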

  The decoder 1203 outputs an interpolation ON determination result for the portions where the error diffusion edge determination flag is ON. When all pixels have been determined, the interpolation determination result 10D is output.

  By adopting such a form, the smoothness of edge portions can be maintained even for image data that has undergone error diffusion processing. In addition, when the configuration of FIG. 12 is applied in addition to that of FIG. 11, or in place of the continuous tone image processing unit 1101, the decoder determines the interpolation determination flag with reference to the table in FIG.

  Once the bitmap of the interpolation determination flag has been generated, step S1004 and the subsequent steps in FIG. 10 are executed with reference to the bitmap, and interpolation processing is performed on areas of predetermined length before and after each transfer point, excluding the interpolation prohibited areas.

  With this configuration, according to the present embodiment, interpolation processing can be omitted for areas whose image quality might be degraded by it. Moreover, even if attribute information indicating the type of image is not attached to the image data, the type of image, that is, its attribute, can be determined from the image data alone, and whether to perform interpolation can be decided according to the determination result. Furthermore, since the attribute can be determined even for image data that has already undergone halftone processing, an appropriate determination result can be obtained, and interpolation performed, even for image data halftoned at the transmission source.

[Modification]
In this embodiment, the continuous tone image determination processing unit 1101, the pattern image determination processing unit 1102, and the isolated point image determination processing unit 1103 are used as the image determination circuits, but the invention is not limited to these attributes. Any attributes may be used as long as the system includes a plurality of attribute determination modules; of course, a system including three or more types of determination units may also be used.

  In this embodiment, pattern matching is used as an example of attribute determination means. However, other methods may be used for determination.

  In the present embodiment, two settings, "interpolation process determination ON" and "interpolation process determination OFF", are described for the interpolation method. However, the interpolation level may instead be varied according to the attribute: rather than the binary choice of whether or not to perform interpolation, the interpolation intensity may be changed in steps. The interpolation intensity can be changed, for example, by changing the range of pixels interpolated around the transfer point. If the default is 16 pixels before and after the transfer point, enlarging the range (for example, to 24 pixels before and after) increases the interpolation intensity; conversely, reducing the range (for example, to 8 pixels before and after) lowers it.
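A sketch of this stepwise intensity control follows. Only the 8/16/24-pixel figures come from the description above; the attribute names and function signature are assumptions.

```python
# Hypothetical attribute -> half-window mapping. The 16-pixel default and
# the 24/8-pixel strong/weak variants are the values given in the text;
# the attribute labels themselves are illustrative.
INTERP_HALF_WINDOW = {
    'default': 16,   # 16 pixels before and after the transfer point
    'strong': 24,    # larger range -> higher interpolation intensity
    'weak': 8,       # smaller range -> lower interpolation intensity
}

def interpolation_window(transfer_point, attribute, line_width):
    """Return the (start, end) pixel range to interpolate, clipped to the line."""
    half = INTERP_HALF_WINDOW.get(attribute, INTERP_HALF_WINDOW['default'])
    return max(0, transfer_point - half), min(line_width, transfer_point + half)
```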

  The interpolation-prohibited area can be specified, for example, by the position of the transfer point and the line number; in other words, it can be identified by the transfer point and the line. Therefore, the attribute determination flag may be stored only for the region (run) containing the transfer point. ("Attribute determination flag" is a generic term for the continuous tone image determination flag, the continuous pattern determination flag, the isolated point determination flag, and the error diffusion edge determination flag.) This is sufficient because the interpolation process only affects a predetermined number of pixels before and after each transfer point.

  In this case, instead of determining the value of the interpolation determination flag for every pixel of the entire image, the determination may be limited to the area involved in the interpolation process around each transfer point, for example, the pixels immediately before the transfer point. For each region containing a transfer point, the interpolation determination flag is then determined with reference to the table of FIG. 14 and stored in association with the transfer point and the line.
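The sparse storage described here, keyed by transfer point and line rather than a full-page bitmap, could be sketched like this. The class and method names, and the choice to default to "interpolate" when no entry exists, are assumptions for illustration.

```python
class InterpolationFlagStore:
    """Store interpolation flags only for (line, transfer point) pairs,
    instead of one flag per pixel of the whole image."""

    def __init__(self):
        self._flags = {}                     # (line, transfer_point) -> bool

    def set_flag(self, line, transfer_point, interpolate):
        self._flags[(line, transfer_point)] = interpolate

    def should_interpolate(self, line, transfer_point):
        # Assumed default: interpolate unless a prohibition was recorded.
        return self._flags.get((line, transfer_point), True)
```

Because interpolation only touches a fixed number of pixels around each transfer point, this keeps memory proportional to the number of transfer points rather than the page area.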

[Second Embodiment]
As another embodiment, an example will be described in which processing for correcting the determination result is performed before the interpolation process S1005 of FIG. 10. This processing corrects erroneous determinations in the result obtained in the interpolation-prohibited area determination S1003, or maintains the continuity of the determination result. FIG. 18 shows the flow of the interpolation processing in this embodiment; steps common to FIG. 10 are given the same reference numerals.

  In step S1001, the halftone processing units 407C, 407M, 407Y, and 407K perform halftone processing on the M-bit contone image data 10A. Here, the halftone processing is screen processing or error diffusion processing. The process then proceeds to step S1002.

  In step S1002, scanning line transfer processing is performed at the timing when image data is read from the storage unit 408.

  In step S1003, the interpolation determination units 409C, 409M, 409Y, and 409K determine the interpolation-prohibited area. That is, if screen processing was used in the halftone processing S1001, the processing illustrated in FIG. 11 is performed; if error diffusion processing was used, the processing described with reference to FIG. 12 is performed. The process then proceeds to step S1804.

  In step S1804, the interpolation determination result 10D generated by the interpolation determination units 409C, 409M, 409Y, and 409K is corrected. For example, when the determination in the interpolation-prohibited area determination S1003 is performed line by line, it is necessary to check whether continuity with the lines above and below is maintained. If there is an erroneous interpolation determination in the main scanning direction, it may also be necessary to correct the determination result based on the correlation with surrounding pixels.

  Therefore, to suppress variation in the interpolation determination result, the flag value of the pixel of interest is ANDed or ORed with the values of the pixels above and below it (or to its left and right), and the result is used as the interpolation determination flag for the pixel of interest; this alignment is performed in the determination result correction processing S1804. The determination result correction process unifies the local interpolation determination results around the pixel of interest. If this process were applied to mutually overlapping regions, there is a risk that the entire image would be homogenized. Therefore, by setting the first pixel of interest to a pixel a predetermined number of pixels away from the transfer point, for example, the interpolation determination flags for the pixels in the region to be transferred can be unified while eliminating the influence of pixels outside that region.
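A minimal sketch of this AND/OR correction over vertical neighbors follows, assuming a list-of-rows boolean layout; the function name, the `mode` parameter, and leaving the edge rows untouched are illustrative choices, not the patented processing.

```python
def correct_determination(flags, mode='or'):
    """Unify each pixel's interpolation flag with its upper/lower neighbors.

    `flags` is a list of rows of booleans (an interpolation determination
    result). With mode='or' a flag becomes ON if the pixel or either
    vertical neighbor is ON; with mode='and' it stays ON only if the pixel
    and both neighbors agree. Edge rows are copied unchanged.
    """
    height, width = len(flags), len(flags[0])
    out = [row[:] for row in flags]
    for y in range(1, height - 1):
        for x in range(width):
            above, here, below = flags[y - 1][x], flags[y][x], flags[y + 1][x]
            if mode == 'or':
                out[y][x] = here or above or below
            else:  # 'and'
                out[y][x] = here and above and below
    return out
```

OR-ing fills in isolated misdeterminations surrounded by ON flags, while AND-ing removes isolated spurious ON flags; either way, the result is locally consistent with the surrounding lines.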

  Subsequently, in step S1004, the following processing is performed with reference to the interpolation determination result 10D: if the determination result for the pixel of interest is "interpolation determination flag ON", the process proceeds to step S1005, where the interpolation processing unit 412 performs interpolation, and then moves to the next pixel; if the flag is OFF, interpolation is not performed and the process moves directly to the next pixel. When the interpolation processing for all pixels is complete, the interpolated image data 18E is output and the processing ends.

  In this way, by performing the process of correcting the interpolation determination result, it is possible to obtain an interpolation determination result that maintains continuity.

  Note that the present invention may be applied to a system composed of a plurality of devices (for example, a host computer, an interface device, a reader, a printer, and the like) or to an apparatus consisting of a single device (for example, a copier or a facsimile machine). The object of the present invention is also achieved by supplying a recording medium on which program code realizing the functions of the above-described embodiments is recorded to a system or apparatus, and having the computer of that system or apparatus read out and execute the program code stored in the storage medium. In this case, the program code itself read from the storage medium realizes the functions of the above-described embodiments, and the program code itself and the storage medium storing it constitute the present invention.

  The present invention also includes the case where an operating system (OS) running on the computer performs part or all of the actual processing based on the instructions of the program code, and the functions of the above-described embodiments are realized by that processing. Furthermore, the present invention also applies to the case where the program code read from the storage medium is written into a memory provided in a function expansion card inserted into the computer or in a function expansion unit connected to the computer. In that case, based on the instructions of the written program code, the CPU of the function expansion card or function expansion unit performs part or all of the actual processing, and the functions of the above-described embodiments are realized by that processing.

Brief description of the drawings

A diagram explaining the correction method for shifts of less than one pixel.
A cross-sectional view showing the structure of a four-drum color printer unit.
A diagram showing the scanning-line profile characteristics for each color of the image forming apparatus.
A configuration diagram of the blocks related to electrostatic latent image creation in the electrophotographic color image forming apparatus of the embodiment.
A diagram showing the curvature characteristics of the image forming apparatus with respect to the laser scanning direction and the correction method.
A diagram showing the curvature characteristics of the image forming apparatus with respect to the laser scanning direction and the correction method.
A diagram showing an example of scanning-line curvature.
A diagram showing the correlation between the direction corrected by the image processing unit 402 based on the profile definition and the shift direction of the image forming unit 401.
A diagram schematically showing the state of the data held by the storage unit 408.
A diagram showing the direction of change from a transfer point to the main-scanning-direction pixel position of the next transfer point.
A flowchart of the main processing of the first embodiment.
A block diagram showing the details of the interpolation-prohibited area determination in step S1003.
A diagram showing the details of the interpolation-prohibited area determination when error diffusion is performed as the halftone processing.
A diagram explaining the transfer point correction processing.
A diagram showing an example of the decoder LUT.
A diagram explaining the one-pixel-unit correction method.
A diagram showing an example of an isolated point pattern template.
A diagram showing an example of the pattern image determination processing.
A processing flowchart showing the second embodiment.

Explanation of symbols

11 Recording medium
21 Paper feed tray
22 Photoconductor
23 Injection charger
23S Sleeve
24 Scanner unit
26 Developer
27 Primary transfer roller
28 Intermediate transfer body
29 Secondary transfer roller
30 Cleaning means
32 Fixing roller
33 Pressure roller
401 Image forming unit
402 Image processing unit
404 Image generation unit
405 Color conversion unit
406 Bit map memory
407 Halftone processing unit
408 Second storage unit
409 Interpolation determination unit
410 Timing adjustment unit
411 Transfer buffer
412 Interpolation processing unit
413 Pulse width modulation
415 Scanner unit
416 Profile

Claims (9)

  1. A color image forming apparatus comprising an image forming unit that forms an image for each color component, the apparatus forming a color image by superimposing the images of the respective color components, comprising:
    line transfer processing means for shifting the positions of pixels of halftone image data to be processed in the sub-scanning direction so as to cancel the shift amount, in the sub-scanning direction, of the scanning line on the image carrier in the image forming unit for each color component;
    interpolation prohibited area determination means for detecting a continuous tone image area of the halftone image data and determining the area of the continuous tone image as an interpolation prohibited area; and
    interpolation processing means for performing, on the halftone image data excluding the interpolation prohibited area, interpolation processing that smooths the pixel-unit shifts caused by the shifting of the image data by the line transfer processing means.
  2. The color image forming apparatus according to claim 1, wherein the interpolation prohibited area determination means determines an isolated point image area, in addition to the continuous tone image area, as the interpolation prohibited area.
  3. The color image forming apparatus according to claim 1 or 2, wherein the interpolation prohibited area determination means comprises:
    dither pattern detection means for detecting, when the halftone image data has been subjected to dither processing, the periodicity of a dither pattern from the halftone image data;
    isolated point detection means for detecting isolated points from the halftone image data; and
    continuous pattern detection means for detecting a continuous pattern from the halftone image data,
    and wherein the interpolation prohibited area is determined from the detection results of the dither pattern detection means, the isolated point detection means, and the continuous pattern detection means.
  4. The color image forming apparatus according to any one of claims 1 to 3, further comprising determination result correction means for correcting, based on the determination results of surrounding pixels, the determination result of the interpolation prohibited area by the interpolation prohibited area determination means at the position where the image data is shifted by the line transfer processing means.
  5. The color image forming apparatus according to claim 3, wherein the interpolation prohibited area determination means further comprises edge detection means for detecting an edge of the image, and wherein:
    when the halftone image data is image data that has been subjected to the dither processing, a dither pattern is detected from the halftone image data by the dither pattern detection means; and
    when the halftone image data is image data that has been subjected to error diffusion processing, an edge is detected from the halftone image data by the edge detection means.
  6. The color image forming apparatus according to claim 1, wherein the interpolation processing means changes the interpolation intensity in steps.
  7. A color image correction method in a color image forming apparatus that comprises an image forming unit for forming an image for each color component and forms a color image by superimposing the images of the respective color components, the method comprising:
    a line transfer processing step in which line transfer processing means shifts the positions of pixels of halftone image data to be processed in the sub-scanning direction so as to cancel the shift amount, in the sub-scanning direction, of the scanning line on the image carrier in the image forming unit for each color component;
    an interpolation prohibited area determination step in which interpolation prohibited area determination means detects a continuous tone image area of the halftone image data and determines the area of the continuous tone image as an interpolation prohibited area; and
    an interpolation processing step in which interpolation processing means performs, on the halftone image data excluding the interpolation prohibited area, interpolation processing that smooths the pixel-unit shifts caused by the shifting of the halftone image data in the line transfer processing step.
  8. 8. The color image correction method according to claim 7, wherein, in the interpolation prohibited area determination step, an isolated point image area is determined as the interpolation prohibited area in addition to the continuous tone image area.
  9. A program for causing a computer to execute each step of the color image correction method according to claim 7 or 8.
JP2007220351A 2007-08-27 2007-08-27 Color image forming apparatus and color image correction method Active JP4966787B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2007220351A JP4966787B2 (en) 2007-08-27 2007-08-27 Color image forming apparatus and color image correction method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2007220351A JP4966787B2 (en) 2007-08-27 2007-08-27 Color image forming apparatus and color image correction method
US12/186,334 US8547599B2 (en) 2007-08-27 2008-08-05 Color image forming apparatus and color image correcting method

Publications (2)

Publication Number Publication Date
JP2009055377A JP2009055377A (en) 2009-03-12
JP4966787B2 true JP4966787B2 (en) 2012-07-04

Family

ID=40407000

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2007220351A Active JP4966787B2 (en) 2007-08-27 2007-08-27 Color image forming apparatus and color image correction method

Country Status (2)

Country Link
US (1) US8547599B2 (en)
JP (1) JP4966787B2 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5060189B2 (en) * 2007-07-10 2012-10-31 キヤノン株式会社 Image forming apparatus and control method thereof
JP5533481B2 (en) * 2010-09-16 2014-06-25 コニカミノルタ株式会社 Image processing apparatus and image processing method
JP5863000B2 (en) * 2011-07-22 2016-02-16 富士ゼロックス株式会社 Image processing apparatus, image forming apparatus, and program
JP5921155B2 (en) * 2011-11-15 2016-05-24 キヤノン株式会社 Image processing apparatus, image processing method, and computer program
EP2979868A4 (en) * 2013-03-27 2016-04-27 Prosper Creative Co Ltd Measuring device, measurement method, information processing device, and measurement program
JP6335013B2 (en) * 2014-04-30 2018-05-30 キヤノン株式会社 Image forming apparatus
JP6371585B2 (en) * 2014-05-22 2018-08-08 キヤノン株式会社 Image forming apparatus

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0250176A (en) * 1988-04-18 1990-02-20 Ricoh Co Ltd Recording distortion correcting device for laser printer
US5034990A (en) * 1989-09-05 1991-07-23 Eastman Kodak Company Edge enhancement error diffusion thresholding for document images
DE69220819T2 (en) * 1991-02-01 1997-11-20 Canon Kk Image processing method and apparatus
US5715070A (en) * 1994-04-28 1998-02-03 Ricoh Company, Ltd. Freely configurable image processing apparatus
JPH10341330A (en) * 1997-06-06 1998-12-22 Minolta Co Ltd Image-forming device
AUPP702498A0 (en) * 1998-11-09 1998-12-03 Silverbrook Research Pty Ltd Image creation method and apparatus (ART77)
JP3463594B2 (en) * 1999-03-02 2003-11-05 松下電器産業株式会社 Color image forming apparatus
DE60001143T2 (en) * 1999-10-29 2003-10-16 Ricoh Kk An image processing apparatus and method, and recording medium
JP2002116394A (en) 2000-10-04 2002-04-19 Canon Inc Laser writing unit
JP2003241131A (en) 2002-02-22 2003-08-27 Canon Inc Deflecting scanner and image forming device
JP2004170755A (en) 2002-11-21 2004-06-17 Canon Inc Color image forming apparatus
US7339599B2 (en) 2003-01-22 2008-03-04 Canon Kabushiki Kaisha Image-processing apparatus and method, computer program, and computer-readable storage medium for discouraging illegal copying of images
JP4393074B2 (en) 2003-01-22 2010-01-06 キヤノン株式会社 Tint block image generation apparatus, background pattern image generation method, the additional information reading apparatus, the additional information reading method
JP2005193384A (en) * 2003-12-26 2005-07-21 Ricoh Co Ltd Image processing method, apparatus, and image forming apparatus
JP4218956B2 (en) * 2004-01-29 2009-02-04 キヤノン株式会社 Image forming system, information processing apparatus and control method thereof
US7684079B2 (en) * 2004-12-02 2010-03-23 Canon Kabushiki Kaisha Image forming apparatus and its control method
EP1710999B1 (en) * 2005-04-08 2015-01-21 Canon Kabushiki Kaisha Color image forming apparatus
JP4612859B2 (en) * 2005-04-15 2011-01-12 キヤノン株式会社 Image forming apparatus and its control method, and computer program
US7344217B2 (en) * 2005-04-15 2008-03-18 Canon Kabushiki Kaisha Image forming apparatus and its control method, and computer program and computer readable storage medium
JP4817727B2 (en) * 2005-06-24 2011-11-16 キヤノン株式会社 Color image forming apparatus

Also Published As

Publication number Publication date
US8547599B2 (en) 2013-10-01
JP2009055377A (en) 2009-03-12
US20090059323A1 (en) 2009-03-05


Legal Events

Date Code Title Description

20100827 A621 Written request for application examination (JAPANESE INTERMEDIATE CODE: A621)
20111216 A131 Notification of reasons for refusal (JAPANESE INTERMEDIATE CODE: A131)
20120214 A521 Written amendment (JAPANESE INTERMEDIATE CODE: A523)
TRDD Decision of grant or rejection written
20120305 A01 Written decision to grant a patent or to grant a registration (utility model) (JAPANESE INTERMEDIATE CODE: A01)
20120402 A61 First payment of annual fees (during grant procedure) (JAPANESE INTERMEDIATE CODE: A61)
FPAY Renewal fee payment (PAYMENT UNTIL: 20150406; Year of fee payment: 3)