JP6303458B2 - Image processing apparatus and image processing method - Google Patents

Publication number: JP6303458B2 (granted; application JP2013251672A, published as JP2015109573A)
Authority: Japan (JP)
Legal status: Active
Inventor: Hiroaki Kodaira (裕明 小平)
Assignee: Konica Minolta, Inc. (コニカミノルタ株式会社)

Description

  The present invention relates to an image processing apparatus and an image processing method.

In general, an image forming apparatus screen-processes image data in order to improve the reproducibility of halftones in the image to be formed.
Because screen processing can produce jaggies along the contours of characters and graphics, those contours are also emphasized in order to reduce jaggies.
During contour emphasis, a pixel whose pixel-value difference from the surrounding pixels is larger than a threshold is selected as a target of contour emphasis, and its pixel value is adjusted.

When an image is formed with multiple colors, setting the threshold to a constant value common to all colors can produce edge enhancement that looks unnatural.
For example, with the four colors C (cyan), M (magenta), Y (yellow), and K (black), the Y color has low visibility: even when the difference in Y pixel values is large, an observer often perceives the difference as small and does not recognize it as a contour. Since it is not recognized as a contour, contour enhancement is unnecessary. If the threshold is common to all colors, however, it is lowered to match the other, more visible colors, so the Y-color difference is enhanced as a contour even though it is not perceived as one. The result is an unnatural image that does not correspond to the observer's relative visibility.

In addition, in order to maintain the color tone of the contour, contour enhancement must adjust the pixel values of all colors, not only the color with the large pixel-value difference. Because of this, moire may occur in an image in which low-visibility characters are superimposed on a background.
For example, in an image where a low-visibility Y-color character is superimposed on a background of another color, the character's color consists of the Y color plus the background color. Consequently, when the difference in Y pixel values is at or above the threshold and the character's outline is emphasized, the outline of the background portion overlapping the character is also emphasized and becomes conspicuous. When the background is a halftone, the emphasized background outline interferes with the background screen formed adjacent to the character by screen processing, and moire occurs.

To avoid such unintended edge enhancement, a method has been proposed that determines whether to perform edge enhancement according to the relative visibility of each color (see, for example, Patent Document 1).
In this method, a positive edge strength, obtained by subtracting the pixel values of the peripheral pixels from that of the target pixel, and a negative edge strength, obtained by subtracting the pixel value of the target pixel from those of the peripheral pixels, are determined for each color. The edge strengths of the colors are then weighted according to relative visibility and summed, and whether to emphasize the edge is determined by comparing the weighted positive and negative edge-strength sums.

JP 2005-341249 A

  However, with the method described above, when a background that differs in color from a character's outline but has the same relative visibility and pixel value is adjacent to that outline, the weighted positive and negative edge-strength sums take the same value, so the outline is not targeted for edge enhancement. Because the contour is not emphasized, jaggies may occur along it due to screen processing when the image of each color is a halftone.

  An object of the present invention is to suppress jaggies and to perform edge enhancement in accordance with relative visibility.

According to the invention of claim 1, there is provided an image processing apparatus comprising:
an edge detection unit that calculates, for each color, the difference between the pixel value of each pixel and those of its surrounding pixels, and detects an edge for each color according to the result of comparing the obtained per-color pixel-value difference with a threshold determined for each color;
a contour determination unit that determines as a contour any pixel in which an edge is detected in any one color by the edge detection unit; and
a contour processing unit that adjusts the pixel values of all colors of the pixels determined to be the contour so as to emphasize the contour,
wherein the edge detection unit detects an edge when the pixel-value difference is larger than the threshold,
the threshold determined for each color is set such that the threshold of the color with the lowest visibility among the colors is larger than the thresholds of the other colors, and
the threshold of the color with the lowest visibility is determined according to the pixel value of each color of a background pixel adjacent to the contour and the type of screen formed on the background.

According to the invention of claim 2, there is provided the image processing apparatus according to claim 1, further comprising a screen processing unit that compares the pixel value of each pixel with a threshold corresponding to the position of that pixel and obtains a binarized or multi-valued pixel value,
wherein the contour processing unit outputs, for a pixel determined to be the contour, a pixel value adjusted to emphasize the contour, and outputs, for a pixel not determined to be the contour, the pixel value obtained by the screen processing unit.

According to the invention of claim 3, there is provided an image processing method including:
an edge detection step of calculating, for each color, the difference between the pixel value of each pixel and those of its surrounding pixels, and detecting an edge for each color according to the result of comparing the obtained per-color pixel-value difference with a threshold determined for each color;
a contour determination step of determining as a contour any pixel in which an edge is detected in any one color in the edge detection step; and
a contour processing step of adjusting the pixel values of all colors of the pixels determined to be the contour so as to emphasize the contour,
wherein the edge detection step detects an edge when the pixel-value difference is larger than the threshold,
the threshold determined for each color is set such that the threshold of the color with the lowest visibility among the colors is larger than the thresholds of the other colors, and
the threshold of the color with the lowest visibility is determined according to the pixel value of each color of a background pixel adjacent to the contour and the type of screen formed on the background.

  According to the present invention, an edge can be detected by focusing on the difference in a single color. Even when the background adjacent to a contour differs in color from the contour while having the same relative visibility and pixel value, the contour can still be targeted for contour enhancement, so jaggies can be suppressed. In addition, the threshold used for edge detection can be determined according to the relative visibility of each color, so contour enhancement matched to relative visibility can be performed.

FIG. 1 is a functional block diagram of the image forming apparatus. FIG. 2 is a functional block diagram of the image processing apparatus of this embodiment. FIG. 3 shows the 3 × 3 pixels extracted when calculating edge strength. FIG. 4 is a flowchart showing the processing sequence of the edge detection unit and the contour determination unit. FIG. 5(A) shows an original image; FIG. 5(B) shows an image in which the pixel value after contone processing is output and the contour is emphasized; FIG. 5(C) shows an image in which the larger of the pixel value after screen processing and the pixel value after contone processing is output and the contour is emphasized. FIGS. 6 to 8 each show an example and a comparative example of contour emphasis by the image processing apparatus. FIG. 9 shows images obtained by performing contour processing on images having different Y pixel values.

  Embodiments of an image processing apparatus and an image processing method of the present invention will be described below with reference to the drawings.

FIG. 1 shows the configuration of an image forming apparatus 10 equipped with the image processing apparatus G of the present embodiment, broken down by function.
As illustrated in FIG. 1, the image forming apparatus 10 includes a control unit 11, a storage unit 12, an operation unit 13, a display unit 14, a communication unit 15, an image reading unit 16, an image generation unit 17, the image processing apparatus G, and an image forming unit 18.

The control unit 11 includes a CPU (Central Processing Unit), a RAM (Random Access Memory), and the like. The control unit 11 reads a program stored in the storage unit 12 and controls each unit of the image forming apparatus 10 according to the program.
For example, the control unit 11 causes the image processing apparatus G to generate bitmap format image data, and based on the image data, the image forming unit 18 forms an image on a sheet using toner.

The storage unit 12 stores programs, files, and the like that can be read by the control unit 11.
As the storage unit 12, a storage medium such as a hard disk or a ROM (Read Only Memory) can be used.

  The operation unit 13 includes an operation key, a touch panel configured integrally with the display unit 14, and the like, and outputs operation signals corresponding to these operations to the control unit 11. The user can perform input operations such as job setting and processing content change by the operation unit 13.

  The display unit 14 can be an LCD (Liquid Crystal Display) or the like, and displays an operation screen or the like according to instructions from the control unit 11.

  The communication unit 15 communicates with a computer on the network, for example, a user terminal, a server, another image forming apparatus, etc. according to an instruction from the control unit 11. For example, the communication unit 15 receives PDL (Page Description Language) data transmitted from a user terminal.

  The image reading unit 16 includes a scanner, reads an original set by a user with the scanner, and generates image data of each color of R (red), G (green), and B (blue).

The image generation unit 17 rasterizes the PDL data received by the communication unit 15 to generate bitmap-format image data in the colors C (cyan), M (magenta), Y (yellow), and K (black).
The image generation unit 17 can also analyze the PDL data and generate attribute data indicating the attribute of each pixel. For example, it can determine pixels in regions rendered from character vector data to have the character attribute, pixels in regions rendered from vector data of graphics such as lines and rectangles to have the graphic attribute, and pixels in regions rendered from files such as JPEG to have the photograph attribute. The attributes are not limited to characters, graphics, and photographs, and can be set arbitrarily.

The image generation unit 17 performs color conversion processing on the image data of each color R, G, and B obtained by the image reading unit 16 to generate image data of each color C, M, Y, and K.
The image generation unit 17 may analyze the generated image data and determine the attribute of each pixel. For example, an image region of a character or a graphic can be extracted by template matching, and the attribute of the image region can be determined as a character or a graphic.

  The image processing apparatus G performs γ correction processing, screen processing, and contour processing on the C, M, Y, and K image data input from the image generation unit 17.

The image forming unit 18 forms an image with toner on a sheet based on the image data of each color C, M, Y, and K input from the image processing apparatus G.
Specifically, the image forming unit 18 includes a transfer member, a fixing device, and the like in addition to an exposure unit, a photosensitive member, a developing unit, and the like provided for each of C, M, Y, and K colors.
The image forming unit 18 irradiates each charged photosensitive member, via the exposure unit, with a laser beam modulated according to the pixel value of each pixel of the image data to form an electrostatic latent image, which the developing unit develops by supplying toner. The image forming unit 18 transfers the images of the respective colors formed on the photoconductors onto the transfer body in overlapping registration, and then transfers the combined image from the transfer body onto the sheet. Finally, the image forming unit 18 heats and presses the sheet bearing the transferred image in the fixing device to fix it.

FIG. 2 shows the configuration of the image processing apparatus G for each function.
As shown in FIG. 2, the image processing apparatus G includes an edge detection unit 1, a contour determination unit 2, a γ correction unit 3, a screen processing unit 4, a contone processing unit 5, and a contour processing unit 6.

The edge detection unit 1 calculates, for each color, the pixel-value difference between each pixel of the image data and its surrounding pixels, and detects an edge for each color by comparing the obtained per-color difference with a threshold determined for each color. An edge is a location where the change in shading, brightness, or color is large.
Specifically, as illustrated in FIG. 2, the edge detection unit 1 includes an edge strength calculation unit 101 and a comparison unit 102 provided for each of C, M, Y, and K colors.

  The edge strength calculation unit 101 calculates edge strength PED [ch] for each color of C, M, Y, and K for each pixel. Ch in [] is a variable representing a color of C, M, Y, or K.

The edge strength calculation unit 101 takes each pixel of the image data in turn as the target pixel and extracts the eight peripheral pixels surrounding it.
FIG. 3 shows a target pixel A0 and peripheral pixels A1 to A8. The pixel values of the respective colors of the pixels A0 to A8 are represented as A0[ch] to A8[ch].

The edge strength calculation unit 101 subtracts the pixel values A1[ch] to A8[ch] of each color of the peripheral pixels A1 to A8 from the pixel value A0[ch] of each color of the target pixel A0, obtaining the differences E1[ch] to E8[ch]:
E1[ch] = A0[ch] − A1[ch]
E2[ch] = A0[ch] − A2[ch]
E3[ch] = A0[ch] − A3[ch]
E4[ch] = A0[ch] − A4[ch]
E5[ch] = A0[ch] − A5[ch]
E6[ch] = A0[ch] − A6[ch]
E7[ch] = A0[ch] − A7[ch]
E8[ch] = A0[ch] − A8[ch]

Since each difference E1[ch] to E8[ch] represents the strength of the edge between the target pixel and the corresponding peripheral pixel A1 to A8, the edge strength calculation unit 101 can determine any of the differences E1[ch] to E8[ch] as the edge strength PED[ch].
In particular, because the maximum of the differences E1[ch] to E8[ch] represents the edge strength that is most easily perceived, it is preferable for the edge strength calculation unit 101 to determine that maximum as PED[ch].
When PED[ch] < 0, the edge strength calculation unit 101 sets PED[ch] to 0.
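As a concrete sketch, the per-channel edge strength described above can be computed as follows (an illustrative reconstruction, not the patented implementation; the function name and 3 × 3 array layout are assumptions):

```python
import numpy as np

def edge_strength(tile):
    """Edge strength PED[ch] for one color channel.

    `tile` is a 3x3 array whose center is the target pixel A0 and whose
    other eight entries are the peripheral pixels A1..A8.
    """
    a0 = int(tile[1, 1])
    diffs = a0 - tile.astype(int)   # E1..E8 (the center entry yields 0)
    ped = int(diffs.max())          # maximum difference: the most easily
                                    # perceived edge strength
    return max(ped, 0)              # PED < 0 is treated as 0
```

For example, a Y-channel tile whose center is 153 on a zero background yields PED[Y] = 153, while a target pixel darker than all of its neighbors yields 0.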

The comparison unit 102 compares the edge strength PED[ch] input from the edge strength calculation unit 101 of each color with a threshold Th[ch] determined for each color.
When the edge strength PED[ch] is larger than the threshold Th[ch], the comparison unit 102 outputs an edge signal ED[ch] of 1 to the contour determination unit 2. An ED[ch] of 1 indicates that an edge is detected between the target pixel A0 and the peripheral pixels A1 to A8.

  Note that contour enhancement is effective for characters and figures, which are prone to jaggies under screen processing. Therefore, when only characters and figures are to be targeted for contour enhancement, the comparison unit 102 may output an ED[ch] of 1 only when the edge strength PED[ch] is larger than the threshold Th[ch] and the attribute data of the target pixel A0 indicates the character or graphic attribute.

  When the edge strength PED[ch] is equal to or less than the threshold Th[ch], the comparison unit 102 outputs an edge signal ED[ch] of 0 to the contour determination unit 2. An ED[ch] of 0 indicates that no edge is detected between the target pixel A0 and the peripheral pixels A1 to A8.

The threshold Th[ch] for each color can be determined arbitrarily, but is preferably determined according to relative visibility.
This allows edge detection and contour enhancement matched to relative visibility, yielding a visually natural image.

In particular, among the colors C, M, Y, and K, the threshold Th[Y] of the Y color, which has the lowest relative visibility, is preferably determined to be larger than the thresholds Th[C], Th[M], and Th[K] of the other colors.
For example, when the pixel value range is 0 to 255, the threshold Th[K] of the K color, which has the highest relative visibility, can be set to 51; the thresholds Th[M] and Th[C] of the M and C colors, with the next highest visibility, to 128 each; and the threshold Th[Y] of the Y color, with the lowest visibility, to 200.
As a result, for the low-visibility Y color, whose pixel-value difference must be larger than for the other colors before it is perceived as a contour, edge detection can exclude from contour enhancement any pixel whose edge strength PED[Y] is not large enough to be perceived as a contour.

The threshold Th[Y] of the color with the lowest visibility is preferably determined according to the pixel value of each color of the background adjacent to the contour and the type of screen formed on the background by the screen processing unit 4.
During contour enhancement, the pixel values of all colors of the targeted pixels are adjusted in order to prevent a loss of color reproducibility. Because of this adjustment, in an image where characters are superimposed on a background, not only the characters but also the background contour overlapping the characters is emphasized; the emphasized background contour may interfere with the background screen formed adjacent to the characters by screen processing and cause moire. In this case, determining the threshold Th[Y] according to the background, as described above, reduces contact between the emphasized background contour and the adjacent background screen, suppressing moire.

The larger the background pixel value, the denser the background screen, and the more conspicuous moire becomes where the screen touches the contour. Therefore, moire caused by contour enhancement can be suppressed by determining Th[Y] to be larger as the background pixel value increases, so that such pixels are less likely to be targeted.
Screen types include dot screens and line screens, and the frequency of contact with the contour varies with the type. For a screen type that contacts the contour more often, moire caused by contour enhancement can likewise be suppressed by determining Th[Y] to be larger.

  For example, when the background C pixel value is 125 and the screen formed on the background is a 190-line dot screen, the Y threshold Th[Y] can be determined to be 225 (= 125 + 100) so that the contour is enhanced only when the difference between the character and background Y pixel values is 100 or more.
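A minimal sketch of such a background-dependent threshold rule, assuming (hypothetically) that each screen type maps to a required Y difference added to the background pixel value; only the 125 + 100 = 225 case comes from the text, and the line-screen value is invented for illustration:

```python
def threshold_y(bg_c_value, screen="dot190"):
    """Hypothetical rule for Th[Y] based on the adjacent background.

    Only the 190-line dot screen case (required difference 100) is taken
    from the text; the line-screen value of 80 is an assumed placeholder.
    """
    required_diff = {"dot190": 100, "line190": 80}[screen]
    return bg_c_value + required_diff
```

With this rule, a background C value of 125 on a 190-line dot screen gives Th[Y] = 225, matching the example above.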

The contour determination unit 2 determines as a contour any pixel in which an edge is detected in any one color by the edge detection unit 1.
Specifically, when any of the edge signals ED[ch] input from the comparison units 102 of the respective colors is 1, the contour determination unit 2 determines that the target pixel A0 is a pixel constituting a contour and outputs an output control signal EDCN of 1 to the contour processing unit 6. An EDCN of 1 indicates that the target pixel A0 is a contour and is a target of contour enhancement.
When all of the edge signals ED[ch] input from the comparison units 102 of the respective colors are 0, the contour determination unit 2 determines that the target pixel A0 is not a pixel constituting a contour and outputs an output control signal EDCN of 0 to the contour processing unit 6. An EDCN of 0 indicates that the target pixel A0 is not a contour and is not a target of contour enhancement.
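Putting the comparison units and the contour determination unit together, the per-color decision and the OR across colors can be sketched as follows (the example thresholds are the ones given above; the function names are assumptions):

```python
COLORS = ("C", "M", "Y", "K")
TH = {"K": 51, "M": 128, "C": 128, "Y": 200}  # example thresholds from the text

def edge_signals(ped):
    """ED[ch] = 1 when the edge strength exceeds the per-color threshold Th[ch]."""
    return {ch: int(ped[ch] > TH[ch]) for ch in COLORS}

def contour_signal(ed):
    """EDCN = 1 when an edge was detected in any one color."""
    return int(any(ed.values()))
```

For instance, a pixel with PED = {C: 130, M: 0, Y: 150, K: 0} is determined to be a contour via its C edge alone, even though its Y difference stays below Th[Y].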

FIG. 4 shows a processing procedure of the edge detection unit 1 and the contour determination unit 2 described above.
When image data is input to the edge detection unit 1, the edge intensity calculation unit 101 corresponding to each color of C, M, Y, and K inputs a pixel value of each color for each pixel.
As shown in FIG. 4, the edge strength calculation unit 101 for each color calculates the edge strength PED [ch] of each input pixel (step S1). The calculated edge strength PED [ch] is output to the comparison unit 102 corresponding to each color.

The comparison unit 102 for each color compares the input edge strength PED [ch] with the threshold value Th [ch] for each color (steps S2 to S5).
When the edge strength PED[ch] is larger than the threshold Th[ch] (steps S2; Y, S3; Y, S4; Y, S5; Y), the comparison unit 102 for each color outputs an edge signal ED[ch] of 1 to the contour determination unit 2 (step S6).
On the other hand, when the edge strength PED[ch] is equal to or less than the threshold Th[ch] (steps S2; N, S3; N, S4; N, S5; N), the comparison unit 102 for each color outputs an edge signal ED[ch] of 0 to the contour determination unit 2 (step S7).

If any of the edge signals ED[ch] input from the comparison units 102 is 1 (step S8; N), the contour determination unit 2 outputs an output control signal EDCN of 1 (step S9).
On the other hand, if all the edge signals ED[ch] input from the comparison units 102 are 0 (step S8; Y), the contour determination unit 2 outputs an output control signal EDCN of 0 (step S10).

The γ correction unit 3 performs γ correction processing on the image data of each color.
The γ correction unit 3 corrects the gradation value of each pixel so that the gradation characteristic of the formed image matches the target gradation characteristic. Specifically, the γ correction unit 3 holds an LUT (Look Up Table) that maps input gradation values to corrected ones, and acquires from the LUT the corrected gradation value corresponding to each pixel's gradation value.
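The LUT lookup can be sketched as follows (the power-law curve and the value γ = 2.2 are illustrative assumptions; an actual table would be calibrated to the engine's measured gradation characteristic):

```python
import numpy as np

def gamma_lut(gamma=2.2, vmax=255):
    """Build a lookup table of corrected gradation values for inputs 0..vmax."""
    x = np.arange(vmax + 1) / vmax
    return np.rint(vmax * x ** (1.0 / gamma)).astype(np.uint8)

def gamma_correct(image, lut):
    """Acquire the corrected gradation value for each pixel from the LUT."""
    return lut[image]
```

Indexing the table with the whole image corrects every pixel in one vectorized step.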

The screen processing unit 4 screen-processes the image data that has undergone γ correction by the γ correction unit 3.
The screen processing unit 4 compares each pixel of the image data against a dither matrix in which a threshold is set for each of n × m pixels, acquiring the threshold corresponding to the position of each pixel in the dither matrix. It then binarizes each pixel: the pixel value becomes the maximum value if it is greater than the acquired threshold, and the minimum value if it is less than or equal to the threshold.

  The screen processing unit 4 can also produce multi-valued pixel values by using a dither matrix in which two thresholds are set for each of the n × m pixels. In this case, the pixel value after screen processing is set to the minimum value if the pixel value is smaller than the smaller threshold, and to the maximum value if it is larger than the larger threshold. If the pixel value lies within the range of the two thresholds, the screen processing unit 4 sets the output to a halftone value between the minimum and maximum, obtained by linear interpolation between the two thresholds.
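Both variants can be sketched minimally, assuming the dither matrix tiles over the image (the function names and the exact interpolation formula are assumptions consistent with the description):

```python
import numpy as np

def screen_binarize(image, dither, vmax=255):
    """Binarizing screen processing: each pixel is compared with the
    dither threshold at its position (the n x m matrix tiles the image)."""
    h, w = image.shape
    n, m = dither.shape
    th = np.tile(dither, (h // n + 1, w // m + 1))[:h, :w]
    return np.where(image > th, vmax, 0)

def screen_multilevel(pixel, th_low, th_high, vmax=255):
    """Multi-valued variant with two thresholds per position."""
    if pixel < th_low:
        return 0          # below the smaller threshold: minimum value
    if pixel > th_high:
        return vmax       # above the larger threshold: maximum value
    # in between: halftone value by linear interpolation of the thresholds
    return round(vmax * (pixel - th_low) / (th_high - th_low))
```

For example, with thresholds 100 and 200, a pixel value of 150 maps to the halftone value 128, while 50 and 250 map to 0 and 255 respectively.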

  The contone processing unit 5 performs contone processing on the image data that has been subjected to the γ correction processing. The contone process is a process for outputting the image data subjected to the γ correction process to the contour processing unit 6 as it is. That is, the pixel value after contone processing of each pixel is the pixel value itself after γ correction processing.

The contour processing unit 6 performs contour processing for adjusting the pixel values of all the colors of the pixels determined as the contour by the contour determining unit 2 so as to enhance the contour.
By adjusting the pixel values of all colors, it is possible to prevent a decrease in the reproducibility of the contour color due to contour enhancement.

The pixel value after screen processing and the pixel value after contone processing of each pixel are input to the contour processing unit 6 from the screen processing unit 4 and the contone processing unit 5 in synchronization. The output control signal EDCN of each pixel is input from the contour determination unit 2 in synchronization with these pixel values.
The contour processing unit 6 outputs the pixel value after contone processing for a pixel whose output control signal EDCN is 1, and the pixel value after screen processing for a pixel whose EDCN is 0.
Because the pixel value after screen processing is determined so that dots are formed at predetermined intervals, it may be output at the minimum value even where the original contour pixel value is larger than the minimum. By outputting the pixel value after contone processing, that is, the original pixel value as-is, the contour pixel value can be kept above the minimum and the contour emphasized.

For a pixel whose output control signal EDCN is 1, the contour processing unit 6 can instead output the larger of the pixel value after screen processing and the pixel value after contone processing. This emphasizes the contour more strongly.
For example, FIG. 5A shows an original image with halftone pixel values. FIG. 5B shows the image obtained by outputting the pixel values after contone processing for the pixels determined to be the contour of the original image of FIG. 5A, and the pixel values after screen processing for the other pixels. In parts of FIG. 5B, the pixel value after screen processing is larger than the pixel value after contone processing.
FIG. 5C, by contrast, shows the image obtained by outputting, for the pixels determined to be the contour, the larger of the pixel value after screen processing and the pixel value after contone processing. Because the larger value is output, the contour of the image in FIG. 5C is emphasized more strongly than in FIG. 5B.
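The output selection described above reduces to a small function (a sketch; the `strong` flag distinguishing the FIG. 5B and FIG. 5C behaviors is an assumed parameter name):

```python
def contour_output(edcn, screen_value, contone_value, strong=True):
    """Pixel value output by the contour processing unit.

    Non-contour pixels (EDCN = 0) pass through the screened value; for
    contour pixels the contone value is used or, in the stronger variant
    of FIG. 5C, the larger of the two values.
    """
    if edcn == 0:
        return screen_value
    return max(screen_value, contone_value) if strong else contone_value
```

A contour pixel screened down to 0 with an original (contone) value of 120 is thus output as 120 in both variants; where the screened value exceeds the contone value, only the strong variant keeps the larger one.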

  Note that the method of emphasizing the contour is not limited to this; the contour processing unit 6 may instead adjust a pixel whose output control signal EDCN is 1 by increasing its pixel value in accordance with that pixel's value.

FIGS. 6 to 8 show examples and comparative examples of contour emphasis by the image processing apparatus G.
FIG. 6 shows images f11 to f14 obtained by performing image processing on the original image f10 including the number 3 and the background.
The original image f10 has a pixel value range of 0 to 255, and the pixel values of the numbers C, M, Y, and K are 60, 36, 153, and 0, respectively. The background C, M, Y, and K pixel values are 128, 0, 0, and 0, respectively.

The image f11 is an image obtained by performing image processing A on the original image f10.
Image processing A is image processing in which the image processing apparatus G outputs pixel values after screen processing for all pixels without performing contour processing by the contour processing unit 6.
Since the contour is not emphasized, jaggies occur at the boundary between the screened numeral and the background.

The image f12 is an image obtained by performing image processing B on the original image f10.
Image processing B is the same as the processing of the image processing apparatus G except that the edge detection unit 1 detects edges using a single threshold common to all colors instead of the per-color thresholds Th[ch].
In image processing B, when 51 is used as the threshold common to all colors, the pixel-value differences of the C and Y colors at the numeral's outline both exceed 51, so an image f12 in which the outline of the numeral is emphasized is obtained.

The image f13 is an image obtained by performing image processing C on the original image f10.
The image processing C is the same image processing as that of the image processing apparatus G except that the edge detection method by the edge detection unit 1 of the image processing apparatus G is replaced with the following detection method.

  In image processing C, the edge strength RED[ch] is obtained for each color in addition to the edge strength PED[ch]. RED[ch] is the difference with the largest absolute value among the differences obtained by subtracting the pixel value A0[ch] of the target pixel A0 from the pixel values A1[ch] to A8[ch] of the peripheral pixels A1 to A8. Next, according to the following formulas (1) and (2), the edge strengths PED[ch] and RED[ch] of each color are weighted according to relative visibility and summed to obtain the total edge strengths PED[All] and RED[All]. An edge is detected if PED[All] > RED[All] is satisfied.

PED [All] = PED [C] × Wc + PED [M] × Wm + PED [Y] × Wy + PED [K] × Wk … (1)
RED [All] = RED [C] × Wc + RED [M] × Wm + RED [Y] × Wy + RED [K] × Wk … (2)
[In the above formulas (1) and (2), Wc, Wm, Wy, and Wk are weighting coefficients set for C, M, Y, and K, respectively, and satisfy Wc + Wm + Wy + Wk = 1. Wc, Wm, Wy, and Wk can be determined according to the relative luminous sensitivity.]

In image processing C, when 2/8, 2/8, 1/8, and 3/8 are used as the weighting coefficients Wc, Wm, Wy, and Wk, respectively, according to the relative luminous sensitivity, PED [All] and RED [All] at the outline of the numeral are as follows.
PED [All] = (60 − 128) × 2/8 + (36 − 0) × 2/8 + (153 − 0) × 1/8 + (0 − 0) × 3/8 ≈ 11
RED [All] = (128 − 60) × 2/8 + (0 − 36) × 2/8 + (0 − 153) × 1/8 + (0 − 0) × 3/8 ≈ −11
Since PED [All] > RED [All] is satisfied, an image f13 in which the outline of the numeral is emphasized is obtained.
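As a minimal sketch, the decision of image processing C can be reproduced with the values of the worked example above. This is illustrative only: the function and variable names are not from the patent, and PED [ch] is assumed to be formed analogously to RED [ch], as the target pixel value minus a peripheral value.

```python
# Illustrative sketch of the edge decision of image processing C.

def edge_strengths(a0, neighbors):
    """PED: difference a0 - Ai with the largest absolute value.
       RED: difference Ai - a0 with the largest absolute value."""
    ped = max((a0 - n for n in neighbors), key=abs)
    red = max((n - a0 for n in neighbors), key=abs)
    return ped, red

# Per-color edge strengths at the outline of the numeral, taken from the
# worked example (target pixel value minus adjacent background value):
PED = {"C": 60 - 128, "M": 36 - 0, "Y": 153 - 0, "K": 0}
RED = {"C": 128 - 60, "M": 0 - 36, "Y": 0 - 153, "K": 0}

# Weighting coefficients Wc, Wm, Wy, Wk set per the relative luminous sensitivity
W = {"C": 2 / 8, "M": 2 / 8, "Y": 1 / 8, "K": 3 / 8}

ped_all = sum(PED[ch] * W[ch] for ch in "CMYK")  # formula (1)
red_all = sum(RED[ch] * W[ch] for ch in "CMYK")  # formula (2)

print(round(ped_all), round(red_all))  # 11 -11
print(ped_all > red_all)               # True -> edge detected
```

Because each RED [ch] is the negation of the corresponding PED [ch] here, the weighted totals come out symmetric, matching the ≈11 and ≈−11 of the worked example.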

The image f14 is an image obtained by performing image processing D on the original image f10.
Image processing D is the same image processing as that of the image processing apparatus G. In image processing D, when 51, 51, 200, and 51 are used as the per-color thresholds Th [C], Th [M], Th [Y], and Th [K], the C pixel-value difference of the numeral exceeds Th [C], so an image f14 in which the outline of the numeral is emphasized is obtained.
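The per-color test of image processing D can be sketched as follows. The comparison by absolute pixel-value difference is an assumption, and all names are illustrative rather than taken from the patent:

```python
# Hypothetical sketch of per-color edge detection against per-color
# thresholds Th[ch], as in image processing D.

def detect_edges(target, background, thresholds):
    """Return the colors whose pixel-value difference exceeds that color's threshold."""
    return {ch for ch in thresholds
            if abs(target[ch] - background[ch]) > thresholds[ch]}

numeral    = {"C": 60,  "M": 36, "Y": 153, "K": 0}   # outline pixel of f10
background = {"C": 128, "M": 0,  "Y": 0,   "K": 0}
Th         = {"C": 51,  "M": 51, "Y": 200, "K": 51}  # Th[Y] raised for the low-sensitivity color

edges = detect_edges(numeral, background, Th)
print(sorted(edges))  # ['C']: only the C difference (68) exceeds its threshold
```

The M difference (36) stays below 51 and the Y difference (153) below 200, so the pixel becomes a contour through the C color alone.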

  Comparing the image f11 with the images f12 to f14, it can be seen that the contour processing emphasizes the outline of the numeral and eliminates the jaggies.

FIG. 7 shows images f21 to f24 obtained by performing the image processing A to D, respectively, on the original image f20 in which the numeral 4 is superimposed on the background.
The original image f20 has a pixel value range of 0 to 255; the C, M, Y, and K pixel values of the numeral are 128, 0, 100, and 0, respectively, and the C, M, Y, and K pixel values of the background are 128, 0, 0, and 0, respectively.

According to image processing A, the outline of the numeral is not emphasized, so an image f21 in which jaggies occur at the boundary between the numeral and the background is obtained. However, since the luminous sensitivity of the Y color of the numeral is low, the jaggies themselves are difficult to perceive.
According to image processing B, the difference between the Y pixel values of the numeral and the background exceeds the threshold, so an image f22 is obtained in which not only the outline of the numeral but also the outline of the background portion overlapping the numeral is emphasized. The emphasized outline of the background portion interferes periodically with the background screen formed adjacent to the outline of the numeral by the screen processing, and moiré occurs in the outline portion.

On the other hand, according to image processing C, the Y weighting coefficient Wy is smaller than those of the other colors, so even though the difference between the Y pixel values of the numeral and the background is large, its influence on the weighted edge strengths PED [All] and RED [All] is small. As a result, an image f23 is obtained in which PED [All] > RED [All] is not satisfied and the outline is not emphasized.
Also, according to image processing D, the Y threshold Th [Y] is large, so the difference between the Y pixel values of the numeral and the background does not exceed the threshold and no edge is detected; an image f24 whose outline is not emphasized is obtained.

Comparing the contour-emphasized image f22 with the non-emphasized images f21, f23, and f24, it can be seen that when a difference only in the Y pixel values, a color of low luminous sensitivity, is recognized as a contour and emphasized, the image looks unnatural.
Image processing C lowers the Y weighting coefficient Wy, and image processing D raises the Y threshold Th [Y], so that even a large difference in Y pixel values is not recognized as a contour and is not emphasized; the contour emphasis is thus adjusted according to the relative luminous sensitivity.

FIG. 8 shows images f31 to f34 obtained by performing the image processing A to D, respectively, on the original image f30 in which the numeral 1 is superimposed on the background.
The original image f30 has a pixel value range of 0 to 255; the C, M, Y, and K pixel values of the numeral are 0, 128, 0, and 0, respectively, and the C, M, Y, and K pixel values of the background are 128, 0, 0, and 0, respectively. The numeral and the background are different colors, M and C respectively, but the relative luminous sensitivities and the pixel values of the two colors are the same.

According to image processing A, the outline of the numeral is not emphasized, so an image f31 in which jaggies occur at the boundary between the numeral and the background is obtained.
According to image processing B, the difference between the M pixel values of the numeral and the background exceeds the threshold, so an image f32 in which the outline of the numeral is emphasized is obtained.
According to image processing C, PED [All] = RED [All] = 0 and PED [All] > RED [All] is not satisfied, so an image f33 whose outline is not emphasized is obtained. As in the image f31, jaggies occur.
According to image processing D, the difference between the M pixel values of the numeral and the background exceeds the threshold, so an image f34 in which the outline of the numeral is emphasized is obtained, as with the image f32.

Comparing the images f31 and f33 with the images f32 and f34, when the C and M colors, whose luminous sensitivity is higher than that of the Y color, are adjacent to each other, the jaggies that occur in the contour are conspicuous, so it is preferable to suppress them by contour emphasis.
However, when different colors having the same relative luminous sensitivity and the same pixel value are adjacent to each other, the edge detection method of image processing C detects no edge, so the contour cannot be emphasized.
On the other hand, according to image processing D, an edge is detected independently for each color, so an edge can be detected regardless of the relationship between the contour and the background color, and the contour can be emphasized.

FIG. 9 shows the relationship between the magnitude of the Y pixel value and the contour emphasis.
In FIG. 9, images f41 to f45 are obtained by performing image processing D on original images in which the numeral 4 is superimposed on the background and the Y pixel value of the numeral is 150, 175, 200, 225, and 255, respectively. In each original image, the C, M, and K pixel values of the numeral are 128, 0, and 0, respectively, and the C, M, Y, and K pixel values of the background are 128, 0, 0, and 0.

Since the Y threshold Th [Y] is 200, images f41 to f43, in which the outline of the numeral is not emphasized, are obtained when the Y pixel value of the numeral is 200 or less; when the Y pixel value of the numeral exceeds 200, images f44 and f45, in which the outline of the numeral is emphasized, are obtained.
As shown in FIG. 9, as the Y pixel value increases, the C color of the background becomes harder to perceive; therefore, even when the outline of the numeral is emphasized, the outline of the background portion emphasized together with the numeral interferes little with the background screen adjacent to the outline.
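The FIG. 9 relationship can be checked numerically. The following sketch assumes the background Y value is 0 and that an edge requires the Y difference to strictly exceed Th [Y] = 200; the names are illustrative:

```python
# Which FIG. 9 images receive contour emphasis, assuming an edge needs the
# Y difference (numeral Y minus background Y = 0) to exceed Th[Y] = 200.
TH_Y = 200
numeral_y = {"f41": 150, "f42": 175, "f43": 200, "f44": 225, "f45": 255}
emphasized = sorted(name for name, y in numeral_y.items() if y - 0 > TH_Y)
print(emphasized)  # ['f44', 'f45']
```

Note that f43, whose Y value equals the threshold exactly, is not emphasized, consistent with "200 or less" above.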

  As described above, the image processing apparatus G includes the edge detection unit 1, which calculates for each color the difference between the pixel values of each pixel and its surrounding pixels and detects an edge for each color according to the result of comparing the obtained per-color pixel-value difference with the threshold determined for each color; the contour determination unit 2, which determines as a contour a pixel in which an edge is detected in any color; and the contour processing unit 6, which adjusts the pixel values of all colors of the pixels determined as the contour so as to emphasize the contour.

By detecting an edge for each color, an edge can be detected by focusing on the difference in a single color. Even when the background adjacent to the contour has a color different from that of the contour but with the same relative luminous sensitivity and pixel value, the contour can still be emphasized, and jaggies can be suppressed by the contour emphasis.
Further, the per-color threshold Th [ch] can be determined according to the relative luminous sensitivity of each color, enabling contour emphasis that takes the luminous sensitivity into account.
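Combining the three units, the processing of one pixel can be sketched as below. All names are illustrative, the per-color comparison by absolute difference is an assumption, and the screen-processed and contour-emphasized pixel values are taken as given inputs rather than computed:

```python
# End-to-end sketch for one pixel: edge detection unit 1 (per-color threshold
# test), contour determination unit 2 (contour if any color has an edge), and
# contour processing unit 6 (select which pixel values to output).

def process_pixel(pixel, neighbor, thresholds, screened, emphasized):
    # Edge detection unit 1: per-color difference vs. per-color threshold
    edge_colors = {ch for ch in thresholds
                   if abs(pixel[ch] - neighbor[ch]) > thresholds[ch]}
    # Contour determination unit 2: contour if an edge is detected in any color
    is_contour = bool(edge_colors)
    # Contour processing unit 6: emphasized values for contour pixels,
    # screen-processed values otherwise
    return emphasized if is_contour else screened

Th = {"C": 51, "M": 51, "Y": 200, "K": 51}
out = process_pixel({"C": 60, "M": 36, "Y": 153, "K": 0},
                    {"C": 128, "M": 0, "Y": 0, "K": 0},
                    Th, screened="screened", emphasized="emphasized")
print(out)  # emphasized (C difference 68 > Th[C] = 51)
```

This mirrors the behavior of claim 2: the screen-processed value is output for non-contour pixels and the contour-emphasized value for contour pixels.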

The above embodiment is a preferred example of the present invention, and the present invention is not limited to it; modifications can be made as appropriate without departing from the spirit of the present invention.
For example, the processing of the image processing apparatus G can be executed by the control unit 11 when the control unit 11 reads a program. Further, not only the image forming apparatus 10 but also a computer such as a general-purpose PC can read the program and execute the processing of the image processing apparatus G.
As a computer-readable medium for the program, a nonvolatile memory such as a ROM or a flash memory, or a portable recording medium such as a CD-ROM can be applied. A carrier wave can also be applied as a medium for providing the program data via a communication line.

DESCRIPTION OF SYMBOLS: 10 Image forming apparatus, 11 Control unit, 12 Storage unit, G Image processing apparatus, 1 Edge detection unit, 2 Contour determination unit, 3 γ correction unit, 4 Screen processing unit, 5 Contone processing unit, 6 Contour processing unit, 17 Image generation unit, 18 Image formation unit

Claims (3)

  1. An image processing apparatus comprising:
    an edge detection unit that calculates, for each color, a difference between pixel values of each pixel and its surrounding pixels, and detects an edge for each color according to a result of comparing the obtained per-color pixel-value difference with a threshold value determined for each color;
    a contour determination unit that determines, as a contour, a pixel in which an edge is detected in any color by the edge detection unit; and
    a contour processing unit that adjusts pixel values of all colors of the pixels determined as the contour so as to emphasize the contour,
    wherein the edge detection unit detects an edge when the difference between the pixel values is larger than the threshold value,
    the threshold value determined for each color is determined such that the threshold value of the color having the lowest luminous sensitivity among the colors is larger than the threshold values of the other colors, and
    the threshold value of the color having the lowest luminous sensitivity is determined according to a pixel value of each color of a background pixel adjacent to the contour and a type of a screen formed on the background.
  2. The image processing apparatus according to claim 1, further comprising a screen processing unit that compares the pixel value of each pixel with a threshold corresponding to the position of the pixel and obtains a binarized or multi-valued pixel value,
    wherein the contour processing unit outputs, for a pixel determined as the contour, a pixel value adjusted to emphasize the contour, and outputs, for a pixel not determined as the contour, the pixel value obtained by the screen processing unit.
  3. An image processing method comprising:
    an edge detection step of calculating, for each color, a difference between pixel values of each pixel and its surrounding pixels, and detecting an edge for each color according to a result of comparing the obtained per-color pixel-value difference with a threshold value determined for each color;
    a contour determination step of determining, as a contour, a pixel in which an edge is detected in any color by the edge detection step; and
    a contour processing step of adjusting pixel values of all colors of the pixels determined as the contour so as to emphasize the contour,
    wherein the edge detection step detects an edge when the difference between the pixel values is larger than the threshold value,
    the threshold value determined for each color is determined such that the threshold value of the color having the lowest luminous sensitivity among the colors is larger than the threshold values of the other colors, and
    the threshold value of the color having the lowest luminous sensitivity is determined according to a pixel value of each color of a background pixel adjacent to the contour and a type of a screen formed on the background.
JP2013251672A 2013-12-05 2013-12-05 Image processing apparatus and image processing method Active JP6303458B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2013251672A JP6303458B2 (en) 2013-12-05 2013-12-05 Image processing apparatus and image processing method


Publications (2)

Publication Number Publication Date
JP2015109573A JP2015109573A (en) 2015-06-11
JP6303458B2 true JP6303458B2 (en) 2018-04-04

Family

ID=53439632

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2013251672A Active JP6303458B2 (en) 2013-12-05 2013-12-05 Image processing apparatus and image processing method

Country Status (1)

Country Link
JP (1) JP6303458B2 (en)




Legal Events

A621 Written request for application examination (JAPANESE INTERMEDIATE CODE: A621; Effective date: 20160926)
A977 Report on retrieval (JAPANESE INTERMEDIATE CODE: A971007; Effective date: 20170616)
A131 Notification of reasons for refusal (JAPANESE INTERMEDIATE CODE: A131; Effective date: 20170718)
A521 Written amendment (JAPANESE INTERMEDIATE CODE: A523; Effective date: 20170829)
TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model) (JAPANESE INTERMEDIATE CODE: A01; Effective date: 20180206)
A61 First payment of annual fees (during grant procedure) (JAPANESE INTERMEDIATE CODE: A61; Effective date: 20180219)
R150 Certificate of patent or registration of utility model (Ref document number: 6303458; Country of ref document: JP; JAPANESE INTERMEDIATE CODE: R150)