US20110235127A1 - Halftone image generation device, halftone image generation method, and computer-readable storage medium for computer program - Google Patents
Halftone image generation device, halftone image generation method, and computer-readable storage medium for computer program
- Publication number
- US20110235127A1 (application Ser. No. 13/048,446)
- Authority
- US
- United States
- Prior art keywords
- halftone image
- pixels
- image
- pixel
- halftone
- Prior art date
- 2010-03-23
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/40—Picture signal circuits
- H04N1/405—Halftoning, i.e. converting the picture signal of a continuous-tone original into a corresponding signal showing only two levels
- H04N1/4051—Halftoning, i.e. converting the picture signal of a continuous-tone original into a corresponding signal showing only two levels producing a dispersed dots halftone pattern, the dots having substantially the same size
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/40—Picture signal circuits
- H04N1/405—Halftoning, i.e. converting the picture signal of a continuous-tone original into a corresponding signal showing only two levels
- H04N1/4051—Halftoning, i.e. converting the picture signal of a continuous-tone original into a corresponding signal showing only two levels producing a dispersed dots halftone pattern, the dots having substantially the same size
- H04N1/4052—Halftoning, i.e. converting the picture signal of a continuous-tone original into a corresponding signal showing only two levels producing a dispersed dots halftone pattern, the dots having substantially the same size by error diffusion, i.e. transferring the binarising error to neighbouring dot decisions
Definitions
- the present invention relates to a device and a method for selectively blending two images processed with two different methods.
- Error diffusion processing, screen processing and the like have been proposed as processing for performing halftone conversion (dithering) on an image.
- Japanese Laid-open Patent Publication No. 2003-274173 proposes a method for making images reproduced with error diffusion processing smoother than has been possible to date.
- Japanese Laid-open Patent Publication No. 2004-179768 proposes a method for selectively using error diffusion processing and screen processing according to an image processing mode. According to the method disclosed in Japanese Laid-open Patent Publication No. 2004-179768, output resulting from error diffusion processing is selected as a default if the image processing mode is "text/photo blend mode" or "text mode". Output resulting from screen processing of 141 lpi halftone dots is selected as a default if in "photo mode" (paragraphs 0012 to 0014).
- FIG. 10 shows an example of an unnatural connection between dots.
- the present invention has been made in consideration of such problems, and has as its object to reduce any unnatural connection between dots near the boundary between two regions processed with two different methods.
- a halftone image generation device that, by partially selecting and blending a first halftone image obtained by halftone processing a specific image using a first method and a second halftone image obtained by halftone processing the specific image using a second method that differs from the first method, generates a third halftone image of the specific image, includes a detector that detects, from the first halftone image, a dot pixel in which a dot is disposed, and a blender that blends the first halftone image and the second halftone image by, in a case where a dot is not disposed in any of neighboring pixels in the first halftone image that neighbor the dot pixel, employing, as binary values of isolated-point related pixels in the third halftone image that are in a same position as the dot pixel and the neighboring pixels, binary values of the dot pixel and the neighboring pixels.
- the halftone image generation device may include a calculator that calculates a density of a prescribed range centered on each of the dot pixel and the neighboring pixels.
- the blender may blend the first halftone image and the second halftone image by, in a case where a dot is disposed in any of the neighboring pixels of the dot pixel in the first halftone image, employing, as binary values of highlight related pixels corresponding to a position of highlight pixels, which are pixels among the dot pixel and the neighboring pixels whose density of the prescribed range is less than a prescribed value, binary values of the highlight pixels.
- the blender may blend the first halftone image and the second halftone image by employing, as binary values of pixels in the third halftone image that are not one of the isolated-point related pixels or one of the highlight related pixels, binary values of pixels in the second halftone image that are in a same position as said pixels.
- the first method is a method using error diffusion processing
- the second method is a method using screen processing.
- FIG. 1 shows an example configuration of a network system having an image forming device.
- FIG. 2 shows an example hardware configuration of the image forming device.
- FIG. 3 shows an example configuration of an image processing circuit.
- FIG. 4 shows an example outline of a method for generating a blended halftone image.
- FIG. 5 shows an example positional relationship between a pixel of interest and neighboring pixels.
- FIG. 6 is for describing a method for calculating area density.
- FIGS. 7A and 7B show example area densities and comparison results.
- FIG. 8 shows an example portion of an image of a switchover region in a blended halftone image.
- FIG. 9 shows an example portion of a switchover region in an error-diffused image.
- FIG. 10 shows an example of an unnatural connection between dots.
- FIG. 1 shows an example configuration of a network system having an image forming device 1
- FIG. 2 shows an example hardware configuration of the image forming device 1
- FIG. 3 shows an example configuration of an image processing circuit 10 j.
- the image forming device 1 is an image processing device typically called a multifunction peripheral (MFP) or the like, and consolidates functions such as copy, PC print (network printing), fax and scan.
- the image forming device 1 can, as shown in FIG. 1 , be connected to other devices such as a personal computer 2 or the like via a communication line 3 .
- the image forming device 1 is, as shown in FIG. 2 , constituted by a CPU (Central Processing Unit) 10 a , a RAM (Random Access Memory) 10 b , a ROM (Read Only Memory) 10 c , a nonvolatile memory device 10 d , an operation panel 10 e , a NIC (Network Interface Card) 10 f , a printing device 10 g , a scanner 10 h , a modem 10 i and the like.
- the scanner 10 h is a device that generates image data by reading an image of an original constituted by a photograph, text, graphic, chart or the like from a sheet.
- the image processing circuit 10 j performs image processing using the image data of an image of an original read by the scanner 10 h or image data transmitted from the personal computer 2 or the like. This will be discussed later.
- the printing device 10 g prints an image that has been image processed by the image processing circuit 10 j onto a sheet.
- the operation panel 10 e is constituted by a touch panel, a group of keys and the like.
- the touch panel displays a screen for providing a message to a user, a screen showing processing results, a screen for the user to input an instruction to the image forming device 1 , and the like. Also, the touch panel detects a position that has been touched, and notifies that position to the CPU 10 a .
- the group of keys is constituted by keys such as numeric keys, a start key and a stop key. The user is able to issue commands and input data to the image forming device 1 by operating the operation panel 10 e.
- the NIC 10 f communicates with other devices such as the personal computer 2 and the like using TCP/IP (Transmission Control Protocol/Internet Protocol) via a so-called LAN (Local Area Network) line or the like.
- the modem 10 i communicates with another fax terminal using the G3 protocol via a fixed telephone network.
- the nonvolatile memory device 10 d is a nonvolatile recording device.
- a hard disk, an SSD (Solid State Drive), a flash memory or the like is used as the nonvolatile memory device 10 d.
- the image processing circuit 10 j is constituted by an error diffusion processing portion 101 , a screen processing portion 102 , a dot monitoring portion 121 , an area density calculation portion 122 , a density threshold value storage portion 123 , a density comparison portion 124 , an AND gate 125 , an expansion processing portion 126 , an OR gate 127 , an MPX (multiplexer) 131 and the like.
- the portions of the image processing circuit 10 j are realized by circuits such as an ASIC (Application Specific Integrated Circuit) or an FPGA (Field Programmable Gate Array). Alternatively, some or all of the functions of the image processing circuit 10 j may be realized by causing the CPU 10 a to execute a program. In this case, a program describing the procedures of various processing (discussed later) can be provided, and the CPU 10 a can be caused to execute this program.
- the image processing circuit 10 j uses such a configuration to perform halftone processing on an image using both error diffusion processing and screen processing.
- the processing content and the like of the portions of the image processing circuit 10 j shown in FIG. 3 will be described, taking the case where halftone processing is performed on a given original image 60 as an example.
- FIG. 4 shows an example outline of a method for generating a blended halftone image 63
- FIG. 5 shows an example positional relationship between a pixel of interest Pa and neighboring pixels Pb
- FIG. 6 is for describing a method for calculating an area density Ds
- FIGS. 7A and 7B show example area densities Ds and comparison result values HK
- FIG. 8 shows an example portion of an image in a switchover region RY 3 of the blended halftone image 63
- FIG. 9 shows an example portion of the switchover region RY 3 of an error-diffused image 61 .
- the user sets a sheet carrying an original image 60 on a platen of the scanner 10 h , and performs a prescribed operation on the operation panel 10 e.
- the scanner 10 h then generates image data 70 by reading the original image 60 from the sheet set on the platen.
- the image data 70 is updated following image processing on the original image 60 by the portions of the image processing circuit 10 j shown in FIG. 3 .
- the error diffusion processing portion 101 performs dithering (halftone processing) on the original image 60 using an error diffusion method.
- the original image 60 dithered by the error diffusion processing portion 101 will be described as “error-diffused image 61 ”.
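The patent does not specify which error diffusion kernel the error diffusion processing portion 101 uses. As an illustrative sketch only, a minimal binarization with the well-known Floyd-Steinberg weights (an assumption, not the patent's stated method) might look like this, with output value 1 meaning a dot is disposed:

```python
def error_diffuse(image):
    """Binarize a grayscale image (list of rows, values 0-255) with
    Floyd-Steinberg error diffusion, raster scan order.
    Illustrative sketch; the patent does not fix the kernel."""
    h, w = len(image), len(image[0])
    buf = [[float(v) for v in row] for row in image]
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = buf[y][x]
            new = 255 if old >= 128 else 0
            out[y][x] = 1 if new else 0
            err = old - new
            # Distribute the quantization error to not-yet-processed neighbors.
            for dx, dy, wgt in ((1, 0, 7 / 16), (-1, 1, 3 / 16),
                                (0, 1, 5 / 16), (1, 1, 1 / 16)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < w and 0 <= ny < h:
                    buf[ny][nx] += err * wgt
    return out
```

Because the quantization error is carried forward, midtone inputs produce the scattered, roughly uniform dot patterns characteristic of error-diffused images.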
- the screen processing portion 102 dithers the original image 60 by performing screen processing.
- the screen processing portion 102 dithers the original image 60 using a screen composed of various shapes such as dots, lines, or mesh.
- the original image 60 dithered by the screen processing portion 102 will be referred to as “screen-processed image 62 ”.
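The exact screen shapes (dots, lines, mesh) used by the screen processing portion 102 are not detailed here. As a stand-in sketch, ordered dithering against a tiled 4x4 Bayer matrix (an assumed example screen, not the patent's) conveys the general idea of screen processing:

```python
# 4x4 Bayer ordered-dither matrix (assumed example screen).
BAYER_4 = [[ 0,  8,  2, 10],
           [12,  4, 14,  6],
           [ 3, 11,  1,  9],
           [15,  7, 13,  5]]

def screen_process(image):
    """Binarize a grayscale image (values 0-255) by comparing each pixel
    against a threshold from the tiled screen matrix."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Map matrix cell 0..15 onto the 0-255 gradation scale.
            threshold = (BAYER_4[y % 4][x % 4] + 0.5) * 16
            out[y][x] = 1 if image[y][x] >= threshold else 0
    return out
```

Unlike error diffusion, the thresholds are fixed per position, so dots cluster into a regular, periodic pattern.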
- the error-diffused image 61 and the screen-processed image 62 are used in order to generate a blended halftone image 63 .
- the error-diffused image 61 and the screen-processed image 62 are selectively used depending on the attributes of regions in the original image 60 or the like.
- the error-diffused image 61 is used for a region whose density gradation is low (highlight region).
- the screen-processed image 62 is used for a region whose density gradation is high.
- the image forming device 1 uses the error-diffused image 61 for a region in which the density gradation is less than a threshold α (hereinafter, described as "highlight region RY 1 "), and uses the screen-processed image 62 for a region in which the density gradation is greater than or equal to a threshold β (hereinafter, described as "high density region RY 2 "). Note that α<β. And for a region whose gradation is greater than or equal to α and less than β (hereinafter, described as "switchover region RY 3 "), the image forming device 1 uses the error-diffused image 61 and the screen-processed image 62 selectively per pixel.
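The three-way split by density gradation can be sketched as follows. The threshold names (rendered here as α and β, written ALPHA and BETA) and the example values are assumptions; the patent leaves the thresholds configurable:

```python
ALPHA, BETA = 64, 192  # assumed example thresholds on a 0-255 gradation scale

def classify_region(gradation):
    """Assign a pixel's density gradation to highlight region RY1,
    switchover region RY3, or high density region RY2."""
    if gradation < ALPHA:
        return "RY1"   # error-diffused image 61 used as-is
    if gradation < BETA:
        return "RY3"   # error diffusion and screen blended per pixel
    return "RY2"       # screen-processed image 62 used as-is
```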
- the portions from the dot monitoring portion 121 to the MPX 131 execute halftone processing on the switchover region RY 3 , by blending the error-diffused image 61 and the screen-processed image 62 .
- the dot monitoring portion 121 monitors the switchover region RY 3 in the error-diffused image 61 , and detects pixels in which dots are disposed from the switchover region RY 3 .
- a given detected pixel will be described as a “pixel of interest Pa”.
- the eight pixels neighboring the pixel of interest Pa such as shown in FIG. 5 , will be described as neighboring pixels Pb.
- the neighboring pixels Pb may be described separately as “neighboring pixel Pb 1 ” to “neighboring pixel Pb 8 ”.
- the dot monitoring portion 121 checks whether the pixel of interest Pa is an isolated point pixel.
- An “isolated point pixel” is a pixel with respect to which dots are not disposed in any of the neighboring pixels. Therefore, the dot monitoring portion 121 checks whether dots are disposed in the neighboring pixels Pb 1 to Pb 8 . The dot monitoring portion 121 discriminates the pixel of interest Pa as being an isolated point pixel if no dots whatsoever are disposed in the neighboring pixels Pb, and discriminates the pixel of interest Pa as not being an isolated point pixel if even one dot is disposed in the neighboring pixels Pb.
- the dot monitoring portion 121 then outputs “1” as an isolated point discrimination result value SH if the pixel of interest Pa is discriminated as being an isolated point pixel, and outputs “0” as the isolated point discrimination result value SH if the pixel of interest Pa is discriminated as not being an isolated point pixel.
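The isolated point check by the dot monitoring portion 121 can be sketched as below. Treating neighbors that fall outside the image as carrying no dot is an assumption; the patent does not discuss border handling:

```python
# Offsets of the eight neighboring pixels Pb1..Pb8 around the pixel of interest.
NEIGHBOR_OFFSETS = [(-1, -1), (0, -1), (1, -1),
                    (-1,  0),          (1,  0),
                    (-1,  1), (0,  1), (1,  1)]

def isolated_point_sh(dots, x, y):
    """Isolated point discrimination result value SH for a dot pixel at
    (x, y): 1 if no dot is disposed in any of the eight neighbors, else 0.
    `dots` is a 2D list of 0/1 indexed as dots[y][x]."""
    h, w = len(dots), len(dots[0])
    for dx, dy in NEIGHBOR_OFFSETS:
        nx, ny = x + dx, y + dy
        if 0 <= nx < w and 0 <= ny < h and dots[ny][nx]:
            return 0  # at least one neighboring dot: not an isolated point
    return 1
```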
- the area density calculation portion 122 calculates an area density Ds for each of the pixel of interest Pa and the neighboring pixels Pb.
- the “area density Ds” is the density of a prescribed range centered on that pixel.
- the area densities Ds of the pixel of interest Pa and the neighboring pixels Pb are output to the density comparison portion 124 .
- the density comparison portion 124 compares the area density Ds of each of the pixel of interest Pa and the neighboring pixels Pb 1 to Pb 8 with a density threshold value Dp. The density comparison portion 124 then outputs “1” as the comparison result value HK for pixels whose area density Ds is less than the density threshold value Dp, and outputs “0” as the comparison result value HK for pixels for which this is not the case. Note that the density threshold value Dp is prestored in the density threshold value storage portion 123 .
- the respective comparison result values HK of the pixel of interest Pa and the neighboring pixels Pb 1 to Pb 8 are all output to the OR gate 127 .
- only the comparison result value HK of the pixel of interest Pa is output to the AND gate 125 .
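The area density and comparison steps might be sketched as follows. The size of the "prescribed range" (a 5x5 window here) and clipping at the image border are assumptions; FIG. 6 would show the patent's actual calculation:

```python
def area_density(dots, x, y, radius=2):
    """Area density Ds: fraction of dot pixels in the square window
    centered on (x, y). Window size and border clipping are assumed."""
    h, w = len(dots), len(dots[0])
    total = count = 0
    for ny in range(max(0, y - radius), min(h, y + radius + 1)):
        for nx in range(max(0, x - radius), min(w, x + radius + 1)):
            total += 1
            count += dots[ny][nx]
    return count / total

def comparison_result_hk(ds, dp):
    """Comparison result value HK: 1 if the area density Ds is less than
    the density threshold value Dp, else 0."""
    return 1 if ds < dp else 0
```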
- the AND gate 125 calculates a logical AND RS of the isolated point discrimination result value SH output from the dot monitoring portion 121 and the comparison result value HK output from the density comparison portion 124 .
- If the isolated point discrimination result value SH and the comparison result value HK of the pixel of interest Pa are both "1", the logical AND RS will be "1". If this is not the case, the logical AND RS will be "0". The logical AND RS is output to the expansion processing portion 126 .
- the expansion processing portion 126 expands the logical AND RS output from the AND gate 125 into a 3 ⁇ 3 matrix KX.
- the expansion processing portion 126 expands the logical AND RS output from the AND gate 125 into a matrix KX in which all of the 3 ⁇ 3 elements are “1” if the logical AND RS is “1”, and expands the logical AND RS output from the AND gate 125 into a matrix KX in which all of the 3 ⁇ 3 elements are “0” if the logical AND RS is “0”.
- the element in the center of the matrix KX corresponds to the pixel of interest Pa, and each of the other elements corresponds to the neighboring pixel Pb in the same relative position with respect to the pixel of interest Pa.
- the upper left element corresponds to the neighboring pixel Pb 1 .
- the lower right element corresponds to the neighboring pixel Pb 8 .
- the matrix KX is output to the OR gate 127 .
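The AND gate 125 and the expansion processing portion 126 together amount to the following sketch, with the 3x3 grids represented as nested lists (an assumed representation):

```python
def and_gate_rs(sh, hk_a):
    """Logical AND RS of the isolated point discrimination result value SH
    and the comparison result value HKa of the pixel of interest Pa."""
    return sh & hk_a

def expand_to_matrix(rs):
    """Expand RS into the 3x3 matrix KX: all elements "1" if RS is "1",
    all "0" otherwise. The center element corresponds to the pixel of
    interest Pa, the corners and edges to the neighboring pixels Pb."""
    return [[rs] * 3 for _ in range(3)]
```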
- the OR gate 127 calculates the logical OR of the comparison result value HK of each of the pixel of interest Pa and the neighboring pixels Pb 1 to Pb 8 output from the density comparison portion 124 and the value of the element corresponding to each pixel shown in the matrix KX output from the expansion processing portion 126 , as per the following equation (1).
- RWa=HKa OR Ya, RWbi=HKbi OR Ybi (i=1 to 8) . . . (1)
- HKa is the comparison result value HK of the pixel of interest Pa.
- HKb 1 to HKb 8 are respectively the comparison result values HK of the neighboring pixels Pb 1 to Pb 8 .
- Ya is the element, shown in the matrix KX, corresponding to the pixel of interest Pa.
- Yb 1 to Yb 8 are respectively the elements, shown in the matrix KX, corresponding to the neighboring pixels Pb 1 to Pb 8 .
- RWa is the logical OR for the pixel of interest Pa.
- RWb 1 to RWb 8 are respectively the logical ORs for the neighboring pixels Pb 1 to Pb 8 .
- According to Equation (1), if the pixel of interest Pa is an isolated point pixel, the logical ORs RWa and RWb 1 to RWb 8 will all be "1", irrespective of the results of the comparisons by the density comparison portion 124 , that is, the comparison result values HK. On the other hand, if the pixel of interest Pa is not an isolated point pixel, the logical ORs RWa and RWb 1 to RWb 8 will equal the respective comparison result values HK of the pixel of interest Pa and the neighboring pixels Pb 1 to Pb 8 .
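Equation (1) applied element-wise over the 3x3 neighborhood can be sketched as below, with HK and KX given as 3x3 nested lists (an assumed representation; the center cell is the pixel of interest Pa):

```python
def or_gate_rw(hk, kx):
    """Per-pixel logical OR RW of the comparison results HK (HKa and
    HKb1..HKb8 arranged as a 3x3 grid) with the expanded matrix KX,
    as per equation (1)."""
    return [[hk[r][c] | kx[r][c] for c in range(3)] for r in range(3)]
```

When KX is all ones (isolated point), every RW is 1 regardless of HK; when KX is all zeros, RW simply equals HK, matching the behavior described above.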
- the MPX 131 selects the binary value of each pixel in the switchover region RY 3 of the blended halftone image 63 from pixels in the error-diffused image 61 or from pixels in the screen-processed image 62 in the following manner.
- the MPX 131 selects the binary values of the pixel of interest Pa and the neighboring pixels Pb in the following manner. For a pixel whose logical OR RW is "1", the MPX 131 selects the binary value of that pixel in the error-diffused image 61 .
- the MPX 131 selects the binary value of the pixel of interest Pa, as the binary value of the pixel in the blended halftone image 63 that is in the same position as the pixel of interest Pa.
- the MPX 131 selects the binary value of the neighboring pixel Pbi, as the binary value of the pixel in the blended halftone image 63 that is in the same position as the neighboring pixel Pbi.
- For a pixel whose logical OR RW is "0", the MPX 131 selects the binary value of the pixel in the screen-processed image 62 that is in the same position as that pixel.
- the MPX 131 retrieves the pixel in the screen-processed image 62 that is in the same position as the pixel of interest Pa, and selects the binary value of the retrieved pixel as the binary value of the pixel in the blended halftone image 63 that is in the same position as the pixel of interest Pa.
- the MPX 131 retrieves the pixel in the screen-processed image 62 that is in the same position as the neighboring pixel Pbi, and selects the binary value of the retrieved pixel as the binary value of the pixel in the blended halftone image 63 that is in the same position as the neighboring pixel Pbi.
- the binary values (indicating the presence or absence of dots) of the pixels in the blended halftone image 63 that are in the same position as the pixel of interest Pa and the neighboring pixels Pb 1 to Pb 8 are thus determined.
- the dot monitoring portion 121 monitors the presence or absence of dots and detects a pixel in which a dot is disposed as a pixel of interest Pa similarly for the remaining pixels in the switchover region RY 3 of the error-diffused image 61 .
- the portions from the area density calculation portion 122 to the MPX 131 then perform the abovementioned processing on the newly detected pixel of interest Pa and the eight pixels Pb neighboring the pixel of interest Pa.
- the binary value of the pixel in the blended halftone image 63 that is in the same position as each of the pixel of interest Pa and the neighboring pixels Pb is then selected from one of the error-diffused image 61 and the screen-processed image 62 .
- the binary value of the pixel in the blended halftone image 63 that is in the same position as each of the pixels (i.e., pixels of interest Pa) in which a dot appears in the switchover region RY 3 of the error-diffused image 61 and the eight pixels neighboring thereto (neighboring pixels Pb) is determined.
- For the remaining pixels in the switchover region RY 3 , the MPX 131 selects the binary values of pixels in the screen-processed image 62 that are in the same position as those remaining pixels.
- the MPX 131 can, with regard to the switchover region RY 3 , be said to blend the error-diffused image 61 and the screen-processed image 62 in the manner described in (a) and (b) below.
- (a) the MPX 131 bases the blended halftone image 63 on the group of pixels in the switchover region RY 3 of the screen-processed image 62 .
- (b) the MPX 131 treats the pixels whose logical OR RW is "1" as replacement target pixels, erases the binary values of pixels in the base group of pixels that are in the same position as the replacement target pixels, and substitutes the binary values of pixels in the error-diffused image 61 that are in the same position as the replacement target pixels.
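Steps (a) and (b) can be sketched as one substitution pass. Representing the replacement target pixels as a set of (x, y) coordinates is an assumption made for illustration:

```python
def blend_switchover(error_img, screen_img, replacement_targets):
    """Blend per (a)/(b): start from the screen-processed pixels of the
    switchover region and substitute the error-diffused binary values at
    the replacement target positions (pixels whose logical OR RW is 1).
    Images are 2D lists of 0/1 indexed as img[y][x]."""
    out = [row[:] for row in screen_img]        # (a) base on screen image
    for x, y in replacement_targets:            # (b) substitute per target
        out[y][x] = error_img[y][x]
    return out
```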
- the above method enables an unnatural connection of the dots such as illustrated in FIG. 10 to be corrected as shown in FIG. 8 .
- the MPX 131 selects, as the binary value of pixels in the highlight region RY 1 of the blended halftone image 63 , the binary values of the pixels in the error-diffused image 61 that are in the same position. Similarly, the MPX 131 selects, as the binary values of pixels in the high density region RY 2 of the blended halftone image 63 , the binary values of the pixels in the screen-processed image 62 that are in the same position.
- the binary values (indicating the presence or absence of dots) of pixels in the highlight region RY 1 , the high density region RY 2 and the switchover region RY 3 are thus selected.
- the binary values of the pixels are then output to a halftone image data generation portion 132 .
- the halftone image data generation portion 132 generates halftone image data 71 indicating the binary values of the pixels selected by the MPX 131 .
- the halftone image data 71 is output to the printing device 10 g .
- the blended halftone image 63 is then printed onto a sheet by the printing device 10 g .
- the halftone image data 71 is transmitted to the personal computer 2 or the like by the NIC 10 f.
- any one pixel may constitute a pixel of interest Pa or a neighboring pixel Pb a plurality of times.
- the pixel P(2,2) shown in FIG. 9 is treated as a pixel of interest Pa because of having a dot disposed therein, and is further treated as a neighboring pixel Pb when the pixel P(3,2) is the pixel of interest Pa.
- the pixel P(4,2) is treated as a neighboring pixel Pb when the pixel P(3,2), which is not an isolated point pixel, is the pixel of interest Pa, and is further treated as a neighboring pixel Pb when the pixel P(5,3), which is an isolated point pixel, is the pixel of interest Pa.
- the pixel P(3,2) is treated as a pixel of interest Pa because of having a dot disposed therein, and is further treated as a neighboring pixel Pb when the pixel P(2,2), which is not an isolated point pixel, is the pixel of interest Pa.
- the pixel P(4,4) is treated as a neighboring pixel Pb when the pixel P(3,5) and the pixel P(5,3), which are isolated point pixels, are respectively the pixel of interest Pa.
- the binary value of a pixel that thus constitutes a pixel of interest Pa or a neighboring pixel Pb a plurality of times is selected more than once by the MPX 131 .
- the MPX 131 can determine the binary value of the pixel in the blended halftone image 63 that is in the same position as that pixel in the following manner, for example, and output the determined binary value to the halftone image data generation portion 132 .
- If all of the plurality of selection results are "0", the MPX 131 determines and outputs the binary value as "0". If all of the plurality of selection results are "1", the MPX 131 determines and outputs the binary value as "1". Alternatively, the MPX 131 may determine and output whichever of "0" or "1" occurs the most as the binary value.
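The resolution of a pixel selected more than once might be sketched as below, using the majority strategy the text allows; resolving an exact tie to "0" is an assumption, as the patent does not state a tie-breaking rule:

```python
def resolve_selections(selections):
    """Resolve a pixel that was selected multiple times by the MPX 131.
    Unanimous results pass through; otherwise the majority value wins
    (ties fall to 0 here, an assumed tie-break)."""
    ones = sum(selections)
    if ones == 0:
        return 0
    if ones == len(selections):
        return 1
    return 1 if ones * 2 > len(selections) else 0
```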
- the present embodiment enables an unnatural connection between dots near the boundary between a region that has undergone error diffusion processing and a region that has undergone screen processing, that is, between dots in a switchover region RY 3 , to be reduced to a greater extent than has been possible to date.
- an error-diffused image and a screen processed image are blended, but the present invention can also be applied to the case where halftone images generated by processing other than error diffusion processing or screen processing are blended.
- the thresholds α and β can be arbitrarily set. If, in the case of 256 gradations, the threshold α is set to "0" and the threshold β is set to "255", the blended halftone image 63 can be generated by performing processing to blend the error-diffused image 61 and the screen-processed image 62 with respect to the entire area of the original image 60 .
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Image Processing (AREA)
- Facsimile Image Signal Circuits (AREA)
- Color, Gradation (AREA)
Abstract
Description
- This application is based on Japanese patent application No. 2010-066878 filed on Mar. 23, 2010, the contents of which are hereby incorporated by reference.
- 1. Field of the Invention
- 2. Description of the Related Art
- Reproducing images with error diffusion processing results in greater sharpness than when images are reproduced with screen processing, although graininess increases.
- However, it may be desired to divide a single image into a plurality of regions, and reproduce some of the regions by error diffusion processing and reproduce the remaining regions by screen processing. In such a case, an unnatural connection between dots such as shown in FIG. 10 may appear near the boundary between an error-diffused region and a screen-processed region. Such an unnatural connection between dots produces image unevenness at the boundary portion, and is a cause of image degradation.
- These and other characteristics and objects of the present invention will become more apparent by the following descriptions of preferred embodiments with reference to drawings.
-
FIG. 1 shows an example configuration of a network system having an image forming device. -
FIG. 2 shows an example hardware configuration of the image forming device. -
FIG. 3 shows an example configuration of an image processing circuit. -
FIG. 4 shows an example outline of a method for generating a blended halftone image. -
FIG. 5 shows an example positional relationship between a pixel of interest and neighboring pixels. -
FIG. 6 is for describing a method for calculating area density. -
FIGS. 7A and 7B show example area densities and comparison results. -
FIG. 8 shows an example portion of an image of a switchover region in a blended halftone image. -
FIG. 9 shows an example portion of a switchover region in an error-diffused image. -
FIG. 10 shows an example of an unnatural connection between dots. -
FIG. 1 shows an example configuration of a network system having an image forming device 1, FIG. 2 shows an example hardware configuration of the image forming device 1, and FIG. 3 shows an example configuration of an image processing circuit 10j.

The image forming device 1 is an image processing device typically called a multifunction peripheral (MFP) or the like, and consolidates functions such as copy, PC print (network printing), fax, and scan.

The image forming device 1 can, as shown in FIG. 1, be connected to other devices such as a personal computer 2 via a communication line 3.

Apart from the image processing circuit 10j, the image forming device 1 is, as shown in FIG. 2, constituted by a CPU (Central Processing Unit) 10a, a RAM (Random Access Memory) 10b, a ROM (Read Only Memory) 10c, a nonvolatile memory device 10d, an operation panel 10e, a NIC (Network Interface Card) 10f, a printing device 10g, a scanner 10h, a modem 10i, and the like.

The scanner 10h is a device that generates image data by reading an image of an original constituted by a photograph, text, graphics, a chart, or the like from a sheet.

The image processing circuit 10j performs image processing using the image data of an original read by the scanner 10h or image data transmitted from the personal computer 2 or the like. This will be discussed later.

The printing device 10g prints an image that has been processed by the image processing circuit 10j onto a sheet.

The operation panel 10e is constituted by a touch panel, a group of keys, and the like. The touch panel displays a screen for providing messages to a user, a screen showing processing results, a screen for the user to input instructions to the image forming device 1, and the like. The touch panel also detects a touched position and notifies the CPU 10a of that position. The group of keys includes numeric keys, a start key, and a stop key. The user can issue commands and input data to the image forming device 1 by operating the operation panel 10e.

The NIC 10f communicates with other devices such as the personal computer 2 using TCP/IP (Transmission Control Protocol/Internet Protocol) via a so-called LAN (Local Area Network) line or the like.

The modem 10i communicates with another fax terminal using the G3 protocol via a fixed telephone network.

The nonvolatile memory device 10d is a nonvolatile recording device. A hard disk, an SSD (Solid State Drive), a flash memory, or the like is used as the nonvolatile memory device 10d.

Apart from an OS (Operating System), programs such as firmware and applications are stored in the ROM 10c or the nonvolatile memory device 10d. These programs are loaded into the RAM 10b and executed by the CPU 10a as necessary.

The image processing circuit 10j, as shown in FIG. 3, is constituted by an error diffusion processing portion 101, a screen processing portion 102, a dot monitoring portion 121, an area density calculation portion 122, a density threshold value storage portion 123, a density comparison portion 124, an AND gate 125, an expansion processing portion 126, an OR gate 127, an MPX (multiplexer) 131, and the like.

The portions of the image processing circuit 10j are realized by circuits such as an ASIC (Application Specific Integrated Circuit) or an FPGA (Field Programmable Gate Array). Alternatively, some or all of the functions of the image processing circuit 10j may be realized by causing the CPU 10a to execute a program. In this case, a program describing the procedures of the various processing (discussed later) can be provided, and the CPU 10a can be caused to execute it.

The image processing circuit 10j uses this configuration to perform halftone processing on an image using both error diffusion processing and screen processing. Hereinafter, the processing content of the portions of the image processing circuit 10j shown in FIG. 3 will be described, taking the case where halftone processing is performed on a given original image 60 as an example.
FIG. 4 shows an example outline of a method for generating a blended halftone image 63, FIG. 5 shows an example positional relationship between a pixel of interest Pa and neighboring pixels Pb, FIG. 6 is for describing a method for calculating an area density Ds, FIGS. 7A and 7B show example area densities Ds and comparison result values HK, FIG. 8 shows an example portion of an image in a switchover region RY3 of the blended halftone image 63, and FIG. 9 shows an example portion of the switchover region RY3 of an error-diffused image 61.

The user sets a sheet carrying an original image 60 on the platen of the scanner 10h and performs a prescribed operation on the operation panel 10e.

The scanner 10h then generates image data 70 by reading the original image 60 from the sheet set on the platen. The image data 70 is updated as the portions of the image processing circuit 10j shown in FIG. 3 perform image processing on the original image 60.

The error diffusion processing portion 101 performs dithering (halftone processing) on the original image 60 using an error diffusion method. Hereinafter, the original image 60 dithered by the error diffusion processing portion 101 will be described as the "error-diffused image 61".

The screen processing portion 102 dithers the original image 60 by performing screen processing. In other words, the screen processing portion 102 dithers the original image 60 using a screen composed of various shapes such as dots, lines, or mesh. Hereinafter, the original image 60 dithered by the screen processing portion 102 will be referred to as the "screen-processed image 62".

The error-diffused image 61 and the screen-processed image 62 are used to generate a blended halftone image 63. At this time, the error-diffused image 61 and the screen-processed image 62 are selectively used depending on the attributes of regions in the original image 60 or the like. For example, the error-diffused image 61 is used for a region whose density gradation is low (a highlight region), while the screen-processed image 62 is used for a region whose density gradation is high.

However, when the error-diffused image 61 and the screen-processed image 62 are selectively used according to the gradation level, the proximity of the boundary between a highlight region and other regions appears unnatural, as was described in the Description of the Related Art.

In view of this, as shown in FIG. 4, the image forming device 1 uses the error-diffused image 61 for a region in which the density gradation is less than α (hereinafter, the "highlight region RY1"), and uses the screen-processed image 62 for a region in which the density gradation is greater than or equal to β (hereinafter, the "high density region RY2"). Note that α<β. For a region whose gradation is greater than or equal to α and less than β (hereinafter, the "switchover region RY3"), the image forming device 1 uses the error-diffused image 61 and the screen-processed image 62 selectively per pixel.

The portions from the dot monitoring portion 121 to the MPX 131 execute halftone processing on the switchover region RY3 by blending the error-diffused image 61 and the screen-processed image 62.

The
dot monitoring portion 121 monitors the switchover region RY3 in the error-diffused image 61 and detects, from the switchover region RY3, pixels in which dots are disposed. Hereinafter, a given detected pixel will be described as a "pixel of interest Pa". The eight pixels neighboring the pixel of interest Pa, as shown in FIG. 5, will be described as neighboring pixels Pb, and may be described separately as "neighboring pixel Pb1" to "neighboring pixel Pb8".

Further, the dot monitoring portion 121 checks whether the pixel of interest Pa is an isolated point pixel. An "isolated point pixel" is a pixel for which no dots are disposed in any of its neighboring pixels. The dot monitoring portion 121 therefore checks whether dots are disposed in the neighboring pixels Pb1 to Pb8, and discriminates the pixel of interest Pa as being an isolated point pixel if no dots whatsoever are disposed in the neighboring pixels Pb, and as not being an isolated point pixel if even one dot is disposed in the neighboring pixels Pb.

The dot monitoring portion 121 then outputs "1" as an isolated point discrimination result value SH if the pixel of interest Pa is discriminated as being an isolated point pixel, and outputs "0" as the isolated point discrimination result value SH if it is discriminated as not being one.

The area density calculation portion 122 calculates an area density Ds for each of the pixel of interest Pa and the neighboring pixels Pb. The "area density Ds" is the density of a prescribed range centered on that pixel.

For example, in the case of calculating the area density Ds of a given pixel P(X,Y) with the prescribed range being a rectangular area of 5×5 pixels, the area density calculation portion 122 calculates the average density of the 5×5 pixels centered on the pixel P(X,Y), as shown in FIG. 6. If, in this case, the density is represented by 8-bit values (i.e., 256 gradations) and dots are disposed in seven of the 25 pixels, the area density calculation portion 122 calculates the area density Ds of the pixel P(X,Y) by performing the operation Ds = 255×7/25 = 71.4.

The area densities Ds of the pixel of interest Pa and the neighboring pixels Pb are output to the density comparison portion 124.

The density comparison portion 124 compares the area density Ds of each of the pixel of interest Pa and the neighboring pixels Pb1 to Pb8 with a density threshold value Dp. It then outputs "1" as the comparison result value HK for pixels whose area density Ds is less than the density threshold value Dp, and outputs "0" as the comparison result value HK for pixels for which this is not the case. Note that the density threshold value Dp is prestored in the density threshold value storage portion 123.

For example, in the case where the respective area densities Ds of the pixel of interest Pa and the neighboring pixels Pb1 to Pb8 are as shown in FIG. 7A and the density threshold value Dp is "35", values such as those shown in FIG. 7B are obtained as the comparison result values HK of the pixels.

The respective comparison result values HK of the pixel of interest Pa and the neighboring pixels Pb1 to Pb8 are all output to the OR gate 127. On the other hand, only the comparison result value HK of the pixel of interest Pa is output to the AND gate 125.

The AND gate 125 calculates a logical AND RS of the isolated point discrimination result value SH output from the dot monitoring portion 121 and the comparison result value HK output from the density comparison portion 124.

In the case where the pixel of interest Pa is an isolated point pixel and the area density Ds of the pixel of interest Pa is less than the density threshold value Dp, the logical AND RS will be "1"; if this is not the case, the logical AND RS will be "0". The logical AND RS is output to the expansion processing portion 126.

The expansion processing portion 126 expands the logical AND RS output from the AND gate 125 into a 3×3 matrix KX. In other words, the expansion processing portion 126 expands the logical AND RS into a matrix KX in which all of the 3×3 elements are "1" if the logical AND RS is "1", and into a matrix KX in which all of the 3×3 elements are "0" if the logical AND RS is "0".

The element in the center of the matrix KX corresponds to the pixel of interest Pa, and the other elements correspond to the neighboring pixels Pb having the same positional relationship with the pixel of interest Pa: for example, the upper left element corresponds to the neighboring pixel Pb1, and the lower right element corresponds to the neighboring pixel Pb8. The matrix KX is output to the
OR gate 127.

The OR gate 127 calculates the logical OR of the comparison result value HK of each of the pixel of interest Pa and the neighboring pixels Pb1 to Pb8 output from the density comparison portion 124 and the value of the corresponding element of the matrix KX output from the expansion processing portion 126, as per the following equation (1):

RWa = HKa OR Ya
RWbi = HKbi OR Ybi (i = 1, 2, . . . , 8)   (1)

Note that HKa is the comparison result value HK of the pixel of interest Pa, and HKb1 to HKb8 are respectively the comparison result values HK of the neighboring pixels Pb1 to Pb8. Ya is the element of the matrix KX corresponding to the pixel of interest Pa, and Yb1 to Yb8 are respectively the elements of the matrix KX corresponding to the neighboring pixels Pb1 to Pb8. RWa is the logical OR for the pixel of interest Pa, and RWb1 to RWb8 are respectively the logical ORs for the neighboring pixels Pb1 to Pb8.

According to Equation (1), if the pixel of interest Pa is an isolated point pixel whose area density Ds is less than the density threshold value Dp, the logical ORs RWa and RWb1 to RWb8 will all be "1", irrespective of the individual comparison result values HK. Otherwise, the logical ORs RWa and RWb1 to RWb8 will equal the respective comparison result values HK of the pixel of interest Pa and the neighboring pixels Pb1 to Pb8.

The
MPX 131 selects the binary value of each pixel in the switchover region RY3 of the blended halftone image 63 from the pixels of the error-diffused image 61 or the pixels of the screen-processed image 62 in the following manner.

The MPX 131 selects the binary values of the pixel of interest Pa and the neighboring pixels Pb as follows. For a pixel whose logical OR RW is "1", the MPX 131 selects the binary value of that pixel in the error-diffused image 61.

In other words, if the logical OR RWa is "1", the MPX 131 selects the binary value of the pixel of interest Pa as the binary value of the pixel in the blended halftone image 63 that is in the same position as the pixel of interest Pa. Similarly, if the logical OR RWbi (where i = 1, 2, . . . , 8) is "1", the MPX 131 selects the binary value of the neighboring pixel Pbi as the binary value of the pixel in the blended halftone image 63 that is in the same position as the neighboring pixel Pbi.

On the other hand, if the logical OR RW is "0", the MPX 131 selects the binary value of the pixel in the screen-processed image 62 that is in the same position as that pixel.

In other words, if the logical OR RWa is "0", the MPX 131 retrieves the pixel in the screen-processed image 62 that is in the same position as the pixel of interest Pa, and selects the binary value of the retrieved pixel as the binary value of the pixel in the blended halftone image 63 that is in the same position as the pixel of interest Pa. Similarly, if the logical OR RWbi is "0", the MPX 131 retrieves the pixel in the screen-processed image 62 that is in the same position as the neighboring pixel Pbi, and selects the binary value of the retrieved pixel as the binary value of the pixel in the blended halftone image 63 that is in the same position as the neighboring pixel Pbi.

The binary values (indicating the presence or absence of dots) of the pixels in the blended halftone image 63 that are in the same positions as the pixel of interest Pa and the neighboring pixels Pb1 to Pb8 are thus determined.

The
dot monitoring portion 121 similarly monitors the remaining pixels in the switchover region RY3 of the error-diffused image 61 for the presence or absence of dots, and detects each pixel in which a dot is disposed as a pixel of interest Pa. The portions from the area density calculation portion 122 to the MPX 131 then perform the abovementioned processing on the newly detected pixel of interest Pa and the eight pixels Pb neighboring it. The binary value of the pixel in the blended halftone image 63 that is in the same position as each of the pixel of interest Pa and the neighboring pixels Pb is then selected from one of the error-diffused image 61 and the screen-processed image 62.

As a result of the above processing, the binary value of the pixel in the blended halftone image 63 that is in the same position as each of the pixels in which a dot appears in the switchover region RY3 of the error-diffused image 61 (i.e., the pixels of interest Pa) and the eight pixels neighboring thereto (the neighboring pixels Pb) is determined. For the remaining pixels in the switchover region RY3 of the blended halftone image 63, the MPX 131 selects the binary values of the pixels in the screen-processed image 62 that are in the same positions as those remaining pixels.

That is, with regard to the switchover region RY3, the MPX 131 can be said to blend the error-diffused image 61 and the screen-processed image 62 in the manner described in (a) and (b) below.

(a) The MPX 131 bases the blended halftone image 63 on the group of pixels in the switchover region RY3 of the screen-processed image 62.

(b) With regard to pixels whose logical OR RW is determined to be "1" by the OR gate 127 (hereinafter, "replacement target pixels"), the MPX 131 erases the binary values of the pixels in the base group that are in the same positions as the replacement target pixels, and substitutes the binary values of the pixels in the error-diffused image 61 that are in the same positions as the replacement target pixels.

The above method enables an unnatural connection of dots such as that illustrated in FIG. 10 to be corrected as shown in FIG. 8.

Further, the MPX 131 selects, as the binary values of the pixels in the highlight region RY1 of the blended halftone image 63, the binary values of the pixels in the error-diffused image 61 that are in the same positions. Similarly, the MPX 131 selects, as the binary values of the pixels in the high density region RY2 of the blended halftone image 63, the binary values of the pixels in the screen-processed image 62 that are in the same positions.

The binary values (indicating the presence or absence of dots) of the pixels in the highlight region RY1, the high density region RY2, and the switchover region RY3 are thus selected, and are output to a halftone image data generation portion 132.

The halftone image data generation portion 132 generates halftone image data 71 indicating the binary values of the pixels selected by the MPX 131.

The halftone image data 71 is output to the printing device 10g, and the blended halftone image 63 is printed onto a sheet by the printing device 10g. Alternatively, the halftone image data 71 is transmitted to the personal computer 2 or the like by the NIC 10f.

Note that, according to the above example, any one pixel may constitute a pixel of interest Pa or a neighboring pixel Pb a plurality of times.
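The per-pixel blending described above (dot monitoring, area density comparison, and selection) can be sketched in Python as follows. This is a simplified software rendering under stated assumptions (image borders wrap around, and the prescribed range is fixed at 5×5), not the patented circuit itself; the function names are hypothetical.

```python
def area_density(img, x, y, half=2):
    """Average density (0-255 scale) of the (2*half+1)^2 window centered
    on (x, y), with 8-bit values as in the FIG. 6 example."""
    h, w = len(img), len(img[0])
    cells = [(i, j) for j in range(y - half, y + half + 1)
                    for i in range(x - half, x + half + 1)]
    dots = sum(img[j % h][i % w] for i, j in cells)  # borders wrap, for brevity
    return 255.0 * dots / len(cells)

def blend_switchover(error_diffused, screened, dp=35):
    """Blend rule for the switchover region: start from the screened image,
    then copy error-diffused binary values wherever the area density is
    below dp, expanding to the whole 3x3 neighborhood of isolated points."""
    h, w = len(error_diffused), len(error_diffused[0])
    out = [row[:] for row in screened]       # (a) base: screened image
    nbrs = [(dx, dy) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    for y in range(h):
        for x in range(w):
            if not error_diffused[y][x]:
                continue                     # only dot pixels are monitored
            isolated = all(not error_diffused[(y + dy) % h][(x + dx) % w]
                           for dx, dy in nbrs if (dx, dy) != (0, 0))
            # AND gate 125: isolated point with low area density
            rs = isolated and area_density(error_diffused, x, y) < dp
            for dx, dy in nbrs:
                px, py = (x + dx) % w, (y + dy) % h
                hk = area_density(error_diffused, px, py) < dp
                if rs or hk:                 # OR of HK with the expanded RS
                    out[py][px] = error_diffused[py][px]  # (b) replacement
    return out
```

For instance, a lone error-diffused dot in a dense screened field keeps its dot and clears its eight neighbors, which is the correction of unnatural dot connections described above.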
For example, the pixel P(2,2) shown in FIG. 9 is treated as a pixel of interest Pa because a dot is disposed therein, and is further treated as a neighboring pixel Pb when the pixel P(3,2) is the pixel of interest Pa. The pixel P(4,2) is treated as a neighboring pixel Pb when the pixel P(3,2), which is not an isolated point pixel, is the pixel of interest Pa, and again when the pixel P(5,3), which is an isolated point pixel, is the pixel of interest Pa. The pixel P(3,2) is treated as a pixel of interest Pa because a dot is disposed therein, and is further treated as a neighboring pixel Pb when the pixel P(2,2), which is not an isolated point pixel, is the pixel of interest Pa. The pixel P(4,4) is treated as a neighboring pixel Pb when each of the pixels P(3,5) and P(5,3), which are isolated point pixels, is the pixel of interest Pa.

The binary value of a pixel that thus constitutes a pixel of interest Pa or a neighboring pixel Pb a plurality of times is selected more than once by the MPX 131. In such a case, the MPX 131 can determine the binary value of the pixel in the blended halftone image 63 that is in the same position as that pixel in the following manner, for example, and output the determined binary value to the halftone image data generation portion 132.

If even one of the plurality of selection results is "0", the MPX 131 determines and outputs the binary value as "0". If all of the plurality of selection results are "1", the MPX 131 determines and outputs the binary value as "1". Alternatively, the MPX 131 may determine and output whichever of "0" or "1" occurs more frequently as the binary value.

The present embodiment enables an unnatural connection between dots near the boundary between a region that has undergone error diffusion processing and a region that has undergone screen processing, that is, between dots in the switchover region RY3, to be reduced to a greater extent than has been possible to date.
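The two resolution policies just described can be expressed compactly as follows; `resolve_selections` is an illustrative name, not part of the specification.

```python
from collections import Counter

def resolve_selections(selections):
    """Combine multiple binary selections made for the same pixel.
    Policy 1 (conservative): output 0 if any selection is 0.
    Policy 2 (majority): output the value selected most often.
    Returns both results as a (policy1, policy2) pair."""
    any_zero = 0 if 0 in selections else 1
    majority = Counter(selections).most_common(1)[0][0]
    return any_zero, majority
```

The conservative policy suppresses a dot whenever any pass voted against it, while the majority policy follows the more frequent vote; the specification leaves the choice between them open.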
In the present embodiment, an error-diffused image and a screen-processed image are blended, but the present invention can also be applied to the case where halftone images generated by processing other than error diffusion processing or screen processing are blended.

The thresholds α and β can be arbitrarily set. If, in the case of 256 gradations, the threshold α is set to "0" and the threshold β is set to "255", the blended halftone image 63 can be generated by performing the processing to blend the error-diffused image 61 and the screen-processed image 62 over the entire area of the original image 60.

Otherwise, the configuration, processing content, processing procedures, data structure, and the like of all or individual portions of the image forming device 1 can be appropriately modified in keeping with the spirit of the invention.

While example embodiments of the present invention have been shown and described, it will be understood that the present invention is not limited thereto, and that various changes and modifications may be made by those skilled in the art without departing from the scope of the invention as set forth in the appended claims and their equivalents.
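As a closing illustration, the three-way region partition governed by the thresholds α and β can be sketched as follows. `classify_region` is a hypothetical name; note that setting α=0 and β=255 reproduces the whole-image blending case mentioned above.

```python
def classify_region(gradation, alpha, beta):
    """Assign a pixel to a region by its density gradation:
    RY1 (highlight) below alpha, RY2 (high density) at or above beta,
    and RY3 (switchover) in between. Requires alpha < beta."""
    if gradation < alpha:
        return "RY1"   # use the error-diffused image
    if gradation >= beta:
        return "RY2"   # use the screen-processed image
    return "RY3"       # blend per pixel
```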
Claims (12)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2010-066878 | 2010-03-23 | ||
JP2010066878A JP4983947B2 (en) | 2010-03-23 | 2010-03-23 | Halftone image generation apparatus and halftone image generation method |
Publications (2)
Publication Number | Publication Date |
---|---|
US20110235127A1 true US20110235127A1 (en) | 2011-09-29 |
US9019563B2 US9019563B2 (en) | 2015-04-28 |
Family
ID=44656168
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/048,446 Active 2033-04-01 US9019563B2 (en) | 2010-03-23 | 2011-03-15 | Halftone image generation device that generates a blended halftone image of an image by partially selecting and blending a first halftone image obtained by halftone processing the specific image using a first method and a second halftone image obtained by halftone processing the specific image using a second method, associated halftone image generation method, and computer-readable storage medium for computer program |
Country Status (2)
Country | Link |
---|---|
US (1) | US9019563B2 (en) |
JP (1) | JP4983947B2 (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5268774A (en) * | 1991-11-27 | 1993-12-07 | Xerox Corporation | Halftoning with enhanced dynamic range and edge enhanced error diffusion |
US5805734A (en) * | 1996-04-01 | 1998-09-08 | Xerox Corporation | Hybrid imaging system |
US6118935A (en) * | 1997-04-01 | 2000-09-12 | Professional Software Technologies, Inc. | Digital halftoning combining multiple screens within a single image |
US6519367B2 (en) * | 1998-09-23 | 2003-02-11 | Xerox Corporation | Method and system for propagating selective amounts of error in a hybrid screening device |
US20050030586A1 (en) * | 2003-07-23 | 2005-02-10 | Jincheng Huang | Adaptive halftone scheme to preserve image smoothness and sharpness by utilizing X-label |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2621865B2 (en) * | 1987-05-21 | 1997-06-18 | キヤノン株式会社 | Image processing device |
JP3784537B2 (en) * | 1998-06-09 | 2006-06-14 | 株式会社リコー | Image processing device |
JP2000196875A (en) * | 1998-12-28 | 2000-07-14 | Ricoh Co Ltd | Image processor |
JP3963260B2 (en) | 2002-03-18 | 2007-08-22 | 株式会社リコー | Image processing apparatus and image forming apparatus |
JP4120328B2 (en) * | 2002-09-18 | 2008-07-16 | 富士ゼロックス株式会社 | Image processing apparatus, image processing method, and image processing program |
JP3874718B2 (en) | 2002-11-25 | 2007-01-31 | 京セラミタ株式会社 | Image processing apparatus and image forming apparatus |
JP2008087382A (en) * | 2006-10-03 | 2008-04-17 | Seiko Epson Corp | High-image-quality halftone processing |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180052447A1 (en) * | 2015-04-28 | 2018-02-22 | Hewlett-Packard Development Company, L.P. | Structure using three-dimensional halftoning |
US10252513B2 (en) | 2015-04-28 | 2019-04-09 | Hewlett-Packard Development Company, L.P. | Combining structures in a three-dimensional object |
US10326910B2 (en) * | 2015-04-28 | 2019-06-18 | Hewlett-Packard Development Company L.P. | Using three-dimensional threshold matrices in the production of three-dimensional objects |
Also Published As
Publication number | Publication date |
---|---|
JP4983947B2 (en) | 2012-07-25 |
US9019563B2 (en) | 2015-04-28 |
JP2011199776A (en) | 2011-10-06 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: KONICA MINOLTA BUSINESS TECHNOLOGIES, INC., JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:YAMAGUCHI, TOMOHIRO;REEL/FRAME:025958/0121 Effective date: 20110303 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 8 |