JP5760785B2 - Image processing apparatus and image processing system

Info

Publication number: JP5760785B2
Application number: JP2011157181A
Authority: JP (Japan)
Prior art keywords: image data, pixel, color, data, output
Legal status: Active
Other languages: Japanese (ja)
Other versions: JP2013026693A (en)
Inventor: Satoshi Nakamura (中村 聡史)
Original Assignee: Ricoh Company, Ltd. (株式会社リコー)
Application filed by Ricoh Company, Ltd.; priority to JP2011157181A
Publication of JP2013026693A; application granted; publication of JP5760785B2

Description

  The present invention relates to an image processing apparatus that reproduces, in a second output result obtained by outputting original image data with a second image output means, the color tone of a first output result obtained by outputting the same original image data with a first image output means.

  An image output device such as a printer or a display is expected to produce output that matches the pixel values of the source document. For this reason, the color profile of the image output device may be updated by comparing the pixel values of the document with pixel values measured from the printed matter with a colorimeter. A widely used method is to have the image output device output a color chart with known pixel values, measure the chart with a colorimeter such as a scanner, compare the two, and update the color profile based on the comparison result (see, for example, Patent Document 1).

The following two patterns can be considered for such work. A printing machine is used as the example in the description below.
a) Matching the color tone to a standard color chart: a color chart specified as a standard is printed by the image output device, each color patch constituting the chart is measured with a colorimeter, and the printer profile of the image output device is updated so that the difference between the obtained colorimetric values and the expected values falls within a predetermined range.
b) Matching the color tone to a reference image output device: for example, the color tone of the output of a proofer (a color proofing machine, or a printing machine that can produce similar output) is matched to the color tone of the output of an image output device. In this case, the color chart is printed by both the proofer and the image output device, and the user measures the color patches of the two printed charts with a colorimeter. The user then updates the printer profile of the proofer so that the difference between the obtained colorimetric values falls within a predetermined range.

  However, the conventional color profile updating method cannot be carried out in situations where a printed copy of the reference color chart cannot be obtained. This is because, when the color tone of the output of one image output device is matched to that of another as described above, both devices need to output the same color chart. In reality, however, there are cases where the reference image output device cannot output a color chart, or where a printed color chart from the reference device cannot be obtained by the side whose color profile is to be updated.

  As an example of such a case, when a printing company receives an order for a printing job from a customer, it may be required to match the color tone of the output of the customer's printer. If color management is performed appropriately on the customer side, the printing company can respond to such a request even under these conditions. In many cases, however, customers are not familiar with color management. Proper color management includes, for example, regular calibration of the image output equipment and management of image data colors based on a standard such as an ICC (International Color Consortium) profile.

  In situations where color charts are not available and color management is not properly performed on the customer side, the printing company must perform color matching manually. This work proceeds by trial and error and depends on the experience and intuition of the operator, so it requires a great deal of time and skill. Furthermore, since each color matching result is printed and checked in turn, a large amount of paper is wasted and the printing company suffers a loss (the discarded paper is called "scrap paper").

  In view of the above problems, an object of the present invention is to provide an image processing apparatus capable of correcting the color misalignment between two printed materials without using a color chart.

  The present invention provides an image processing apparatus for reproducing, in a second output result obtained by outputting original image data with a second image output means, the color tone of a first output result obtained by outputting the original image data with a first image output means. The apparatus comprises: geometric transformation parameter estimating means for estimating a first geometric transformation parameter for aligning first output image data, obtained by reading the first output result with a reading device, with the original image data, and a second geometric transformation parameter for aligning second output image data, obtained by reading the second output result with the reading device, with the original image data; difference detection means for generating difference image data by obtaining, using the first and second geometric transformation parameters, the difference between the pixel values of mutually corresponding pixels or pixel groups of the first output image data and the second output image data; correction processing means for creating corrected image data in which the pixel values of the difference image data are converted with reference to a conversion table that associates the pixel values of the difference image data with converted pixel values; and image synthesizing means for synthesizing the original image data and the corrected image data by arithmetically processing the pixel values of corresponding pixels of the original image data and the corrected image data.

  It is possible to provide an image processing apparatus that can calibrate the color misalignment between two printed materials without using a color chart.

FIG. 1 is an example of a diagram illustrating the characteristic part of the color tone correction of the present embodiment.
FIG. 2 is an example of a configuration diagram of a color tone conversion system.
FIG. 3 is an example of a hardware configuration diagram of the color tone conversion system.
FIG. 4 is an example of a hardware configuration diagram of a computer.
FIG. 5 is an example of a hardware configuration diagram of an MFP when the color tone conversion system is realized by a single MFP.
FIG. 6 is an example of a functional block diagram of the color tone conversion system or the MFP.
FIG. 7 is an example of a flowchart illustrating the procedure by which the color tone conversion system or the MFP performs color tone correction.
FIG. 8 is a diagram showing an example of geometric transformation.
FIG. 9 is an example of a configuration diagram of a color tone conversion system.
FIG. 10 is an example of a diagram illustrating the characteristic part of the color tone correction (Example 2).
FIG. 11 is an example of a functional block diagram of the color tone conversion system or the MFP (second embodiment).
FIG. 12 is an example of a flowchart illustrating the procedure by which the color tone conversion system or the MFP performs color tone correction.
FIG. 13 is an example of a flowchart illustrating the procedure by which the color tone reproduction characteristic estimation unit estimates the color reproduction characteristic of a user printer.
FIG. 14 is an example of a diagram explaining the case where the number of divisions and the division width are determined in advance.
FIG. 15 is an example of a diagram explaining the case where the number of divisions and the division width are determined from a histogram.
FIG. 16 is an example of a diagram illustrating the relationship between user image data and document image data.
FIG. 17 is an example of a functional block diagram of the color tone conversion system or the MFP (third embodiment).
FIG. 18 is an example of a flowchart illustrating the procedure by which the color tone conversion system or the MFP performs color tone correction.
FIG. 19 is an example of a diagram explaining weight data.
FIG. 20 is an example of a diagram explaining color difference distribution.

  DESCRIPTION OF EMBODIMENTS Hereinafter, embodiments for carrying out the present invention will be described with reference to the drawings.

FIG. 1 is an example of a diagram illustrating a characteristic part of color tone correction according to the present embodiment.
The left figure shows reference image data obtained by outputting the original image data from the reference printer, which is the target for color tone matching (in practice, the image data obtained by reading that printout with a scanner or the like).
The right figure shows user image data output by the user printer, whose color tone should be matched to that of the reference printer (again, the original image data as read back by a scanner or the like).

  The color tone conversion system creates difference image data between the reference image data and the user image data. Here, the pixel values of the user image data are simply subtracted from the pixel values of the reference image data. Since the difference image data represents the discrepancy between the reference image data and the user image data, the user image data should be printed so that this difference is eliminated.
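A minimal sketch of this subtraction (NumPy assumed; not part of the patent disclosure); the cast to a signed type keeps negative differences, which must be preserved as noted in step S170 below:

```python
import numpy as np

def difference_image(reference, user):
    """Signed per-pixel difference between aligned reference and user images.

    Both inputs are assumed to be uint8 arrays of identical shape
    (e.g. H x W x 3 RGB); the result keeps the sign of each difference.
    """
    return reference.astype(np.int16) - user.astype(np.int16)
```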

  Therefore, the color tone conversion system reflects the difference image data in the document image data. However, simply adding the difference image data to the document image data does not make the user image data equivalent to the reference image data when the user printer outputs the document image data. This is because the color tone reproduction characteristics (what color a given input color is output as) are generally not linear, so the linear operation of adding the difference image data to the original image data does not match the color tone reproduction characteristics.

  For this reason, the color tone conversion system of the present embodiment corrects the difference image data for each pixel or each pixel group. For this purpose, an LUT (the correction parameter described later) is prepared in advance.

  For example, when the pixel value of a certain pixel in the difference image data is c, the value associated with c in the LUT is taken as the corrected pixel value c′.

  If the color tone conversion is applied to the corrected pixel value c′, the result can be expected to equal the difference between the reference image data and the user image data. Therefore, the color tone conversion system adds the corrected difference image data (hereinafter referred to as corrected image data), obtained by performing the above correction for each pixel or pixel group of the difference image data, to the document image data. In this embodiment, adding the pixel values of the corrected image data to the original image data is called image data synthesis. The user image data obtained when the user printer outputs the synthesized original image data can be expected to be equivalent to the reference image data.
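As an illustrative sketch of the LUT correction and the synthesis, assuming the table is a one-dimensional array indexed by the offset difference value (the 511-entry layout and the clipping are assumptions, not specified by the patent):

```python
import numpy as np

def correct_and_synthesize(diff, original, lut):
    """Convert difference values c through an LUT to c', then add to the original.

    `diff` is a signed int16 difference image in [-255, 255]; `lut` is a
    hypothetical 511-entry table mapping each difference value (offset
    by +255) to its corrected value c'.
    """
    corrected = lut[diff + 255]                        # c -> c'
    combined = original.astype(np.int32) + corrected   # image data synthesis
    return np.clip(combined, 0, 255).astype(np.uint8)
```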

FIG. 2 shows an example of a configuration diagram of the color tone conversion system 610. The equipment and image data are defined as follows.
- First image output device: a printer (referred to as the "reference printer")
- Second image output device: a printer (referred to as the "user printer")
- Image reading device: a scanner

The terms used in the following are defined as follows.
- Reference printer: corresponds to the first image output device; the printer whose color tone is the target of the adjustment.
- User printer: corresponds to the second image output device; the printer whose color tone is to be adjusted to match the reference printer 400.
- Scanner: corresponds to the image reading device.
- Document image data: the image data a printer uses to produce a printed matter.
- Reference printed matter: the document image data as output by the reference printer 400; the target for color matching.
- Reference image data: image data obtained by reading the reference printed matter with the image reading device.
- User printed matter: the document image data as output by the user printer 200; the printed matter whose color tone should match the reference printed matter.
- User image data: image data obtained by reading the user printed matter with the image reading device.

In the present embodiment, given the reference printed matter and the user printer, color tone conversion is applied to the document image data given to the user printer 200, so that a user printed matter with a color tone equivalent to that of the reference printed matter is obtained.

  The apparatus that performs the color tone conversion may be the second image output device or the scanner 300, or may be a separate computer 100. In the present embodiment, the description assumes that the computer 100 combines the original image data and the corrected image data.

  The color tone conversion system 610 illustrated in FIG. 2 includes a computer 100, a user printer 200, and a scanner 300 connected via a network 500. An offset printing machine, a gravure printing machine, or the like may be used instead of the user printer 200, and a spectrocolorimeter or a camera may be used instead of the scanner 300. Since the reference printer 400 is assumed not to exist on the user side of the color tone conversion system 610, it is not connected to the network, although it may be. It is assumed that the user of the color tone conversion system 610 has already obtained the reference printed matter output by the reference printer 400.

  The network may be an in-house LAN, a wide area LAN (WAN), an IP-VPN (Virtual Private Network), an Internet VPN, the Internet, or a combination of these; it is sufficient that the computer 100, the user printer 200, and the scanner 300 can communicate with one another. Part of the path may include a telephone line, and the connections may be wired or wireless.

  Note that the reference printer 400 and the user printer 200 need not be different devices; the same device may be used, for example, when matching the current color tone to a past one. The reference printer 400 and the user printer 200 may also have one or more of a scanner function, a FAX function, and a copy function, as long as they have a printer function. Similarly, the scanner 300 may have one or more of a printer function, a FAX function, and a copy function, as long as it has a scanner function. An apparatus having several of these functions may be referred to as an MFP (Multifunction Peripheral).

  Further, the computer 100 generates difference image data from the reference image data, obtained by the scanner 300 reading the reference printed matter, and the user image data, obtained by the scanner 300 reading the user printed matter output by the user printer 200; it corrects the difference image data and then combines it with the original image data. The document image data may be stored in advance by the user printer 200 or may be acquired from the reference printer 400. The computer 100, the user printer 200, and the scanner 300 may also be implemented as a single MFP.

  FIG. 3 shows an example of a hardware configuration diagram of the color tone conversion system 610. The color tone conversion system 610 includes an image input unit 601, an image output unit 602, an image storage unit 603, an image analysis unit 604, a parameter storage unit 605, and an image processing unit 606.

  The image input unit 601 inputs an image output by an image output device and corresponds to the scanner 300 in FIG. 2. The image storage unit 603 stores the image data received by the image input unit 601 and corresponds to the computer 100 in FIG. 2. The image analysis unit 604 generates corrected image data from the reference image data, the user image data, and the document image data, and synthesizes the corrected image data with the document image data to generate corrected document image data; it corresponds to the computer 100 in FIG. 2. The parameter storage unit 605 stores the difference image data, the corrected image data, the corrected document image data, the LUT, and so on, and corresponds to the computer 100 in FIG. 2. The image processing unit 606 performs image processing on the corrected document image data and corresponds to the user printer 200 in FIG. 2. The image output unit 602 outputs the corrected document image data and also corresponds to the user printer 200 in FIG. 2.

  FIG. 4 shows an example of a hardware configuration diagram of the computer 100. The computer 100 includes a CPU 101, a RAM 102, a ROM 103, a storage medium mounting unit 104, a communication device 105, an input device 106, a drawing control unit 107, and an HDD 108 that are mutually connected by a bus. The CPU 101 reads out an OS (Operating System) and a program from the HDD 108 and executes them to provide various functions and generate corrected document image data.

  The RAM 102 is a working memory (main storage memory) that temporarily stores data needed while the CPU 101 executes a program. The ROM 103 stores the BIOS (Basic Input Output System), programs for starting the OS, and static data.

  A storage medium 110 can be attached to and detached from the storage medium mounting unit 104, which reads programs recorded on the storage medium 110 and stores them in the HDD 108. The storage medium mounting unit 104 can also write data stored in the HDD 108 to the storage medium 110. The storage medium 110 is, for example, a USB memory or an SD card.

  The input device 106 is a keyboard, a mouse, a trackball, or the like, and accepts various operation instructions from the user to the computer 100.

  The HDD 108 stores various data such as the OS, programs, and image data; a nonvolatile memory such as an SSD may be used instead.

  The communication device 105 is a NIC (Network Interface Card) for connecting to a network 301 such as the Internet, and is, for example, an Ethernet (registered trademark) card.

  The drawing control unit 107 interprets the drawing commands that the CPU 101 writes to graphics memory while executing the program 111, generates a screen, and renders it on the display 109.

  FIG. 5 shows an example of a hardware configuration diagram of the MFP 700 when the color tone conversion system 610 is realized by a single MFP 700. The MFP 700 includes a controller 30, an operation unit 31, a fax control unit 32, a plotter 33, a scanner 34, and other hardware resources 35. The controller 30 includes a CPU 11, a MEM-P 12, an NB (North Bridge) 13, an ASIC 16, a MEM-C 14, an HDD 15 (Hard Disk Drive), and a peripheral device 17 connected to the NB 13 via a PCI bus.

  In the controller 30, the ASIC 16 is connected to the MEM-C 14, the HDD 15, and the NB 13, and the NB 13 is connected to the CPU 11 and the MEM-P 12. The NB 13 is part of the CPU chipset and serves as a bridge connecting the CPU 11, the MEM-P 12, the ASIC 16, and the peripheral devices.

  The ASIC 16 is an IC for image processing applications and performs various kinds of image processing. It also serves as a bridge connecting the AGP, the HDD 15, and the MEM-C 14. The CPU 11 performs overall control of the MFP 700 and starts and runs the various applications installed in it.

  The MEM-P 12 is a system memory used by the MFP 700 system, and the MEM-C 14 is a local memory used as a buffer for image data during image processing.

  The HDD 15 is a large-capacity storage, and an SSD (Solid State Drive) or the like may be used. The HDD 15 stores an OS, various applications, font data, and the like. The HDD 15 stores a program 23 for generating corrected document image data.

  The peripheral devices 17 are a serial bus, NIC, USB host, IEEE802.11a / b / g / n, IEEE1394, and memory card I / F. For example, a Centronics cable is connected to the serial bus. The NIC controls communication via the network. A device is connected to the USB host via a USB cable. IEEE802.11a / b / g / n is an interface for a wireless LAN according to these standards, and controls communication by the wireless LAN. IEEE 1394 is an interface that controls high-speed serial communication. Various memory cards are mounted on the memory card I / F, and data is read and written. The memory card is, for example, an SD card, a multimedia card, an xD card, or the like.

  The operation unit 31 includes a hardware keyboard and display means such as a liquid crystal display. It receives input operations from the user and displays various screens. The operation unit 31 is also equipped with a touch panel and can accept user operations through displayed soft keys.

  The fax control unit 32 is connected to a public communication network via an NCU (Network Control Unit), and performs facsimile transmission / reception according to a communication procedure (communication protocol) compatible with, for example, a G3 or G4 standard facsimile. The fax control unit 32 performs signal processing such as data compression and modulation on the image data and transmits the image data, and decompresses the image data received from the other party and corrects the error to restore the image data.

  The plotter 33 is, for example, a monochrome or color plotter using an electrophotographic method; it forms an image for each page based on the print data or the image data read by the scanner 34 and transfers it to a sheet. For example, a toner image formed on a photosensitive drum is transferred to the sheet by an electrophotographic process using a laser beam, fixed by a fixing device with heat and pressure, and output. Printing may also be performed by ejecting ink droplets.

  The scanner 34 optically scans a document placed on the contact glass, A/D-converts the reflected light, performs known image processing, and converts the result into digital data of a predetermined resolution to generate image data.

  In the MFP 700 of FIG. 5, the image input unit 601 of FIG. 3 corresponds to the scanner 34, the image output unit 602 corresponds to the plotter 33, the image storage unit 603 corresponds to the HDD 15, and the image analysis unit 604 corresponds to the CPU 11. The parameter storage unit 605 corresponds to the HDD 15, and the image processing unit 606 corresponds to the ASIC 16.

  FIG. 6 is an example of a functional block diagram of the color tone conversion system 610 or the MFP 700 of this embodiment. The color tone conversion system 610 of this embodiment includes an image reading unit 41, a geometric conversion parameter estimation unit 42, a difference detection unit 61, a correction processing unit 62, and an image composition unit 63.

  The image reading unit 41 reads the reference printed material and the user printed material, which are output results of the document image data, and generates the reference image data and the user image data.

  The geometric transformation parameter estimation unit 42 estimates the geometric transformation parameters between the document image data and the reference image data, and between the document image data and the user image data.

  The difference detection unit 61 detects a difference between the reference image data and the user image data, and generates difference image data.

  The correction processing unit 62 applies correction processing to the difference image data using a predetermined correction parameter and generates corrected image data. The correction processing unit 62 holds the correction parameter, which is actually stored in the HDD 108 or the HDD 15.

  The image combining unit 63 combines the document image data and the corrected image data to generate corrected document image data.

[Operation procedure]
FIG. 7 is an example of a flowchart illustrating a procedure in which the color tone conversion system 610 or the MFP 700 performs color tone correction.

  The image reading unit 41 reads the reference printed material and generates reference image data (S110).

  The user printer 200 prints the original image data and outputs a user print (S120).

  The image reading unit 41 reads the user printed matter and generates the user image data (S130). The reference printed matter and the user printed matter may be read by the same scanner, or by separate scanners provided that their color profiles can be used to convert the readings into a device-independent color space.

  Next, the geometric transformation parameter estimation unit 42 aligns the reference image data and the user image data with the document image data (S140). That is, using the original image data as a reference, geometric conversion parameters for the reference image data and user image data are obtained, and alignment is performed using the geometric conversion parameters. Examples of geometric transformation parameters include displacement, rotation angle, and scaling factor.

A known technique may be used to estimate the geometric transformation parameters; examples include a method using markers, pattern matching methods that do not use markers, and the phase-only correlation method.
a) Method using markers: markers called "register marks" are placed at the four corners and at the center of each side of the document image data before it is output. When the reference printed matter or the user printed matter is read, the misalignment of these register marks is detected and used to obtain the displacement, the rotation angle, and the magnification.

  FIG. 8A is a diagram illustrating an example of register marks. The image data and four to six register marks are formed on one recording sheet. Assuming that the relative positions of the register marks and the image data are the same in the document image data and the user image data, the geometric transformation parameters can be obtained by comparing the positional deviations of corresponding register marks. For example, since the approximate position of each register mark relative to the paper edge is known, the mark positions can be detected by running mark detection processing within a predetermined range from the paper edge.

FIG. 8B is an example of a diagram explaining the displacement of register mark positions. Pn (n is an integer of 1 or more) indicates the position of a feature point of a register mark in the document image data, and qn indicates the position of the corresponding feature point in the reference image data. If there were no misalignment, the point pairs P1 and q1, P2 and q2, P3 and q3, ... would coincide, so the geometric transformation parameters can be obtained by finding the correspondence between the points with a known method. It is known that two point patterns can be matched by applying, for example, an affine transformation to one of them; to obtain the geometric transformation parameters, it therefore suffices to find the optimal affine parameters that bring the two point patterns into approximate agreement. For example, an evaluation function of the affine parameters is defined over P1 to P6, and the affine parameters that minimize it are used as the geometric transformation parameters, as sketched below.
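The minimization can be sketched as an ordinary least-squares fit; the function below (the NumPy solver and all names are illustrative assumptions) returns the affine matrix and the translation that best map the points Pn onto qn:

```python
import numpy as np

def fit_affine(p, q):
    """Least-squares affine transform mapping points p (N x 2) onto q (N x 2).

    Solves q ~= p @ A.T + t by minimizing the squared residuals, which
    plays the role of the evaluation function over P1..P6.
    """
    X = np.hstack([p, np.ones((p.shape[0], 1))])     # N x 3 design matrix
    params, *_ = np.linalg.lstsq(X, q, rcond=None)   # 3 x 2 solution
    A, t = params[:2].T, params[2]
    return A, t
```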
b) Method using pattern matching: an example of a method that estimates only the displacement is template matching. In template matching, one image is used as the template, the degree of coincidence with the other image is computed while shifting the position little by little, and the position with the highest degree of coincidence is detected. When the geometric transformation cannot be limited to a displacement alone, this must be combined with a method for estimating the rotation angle (such as the Hough transform) or the magnification (such as multi-scale analysis).

In the block matching method, a variant of template matching, one image is divided into blocks, and the displacement is obtained for each block by detecting the position with the highest degree of coincidence in the other image. The rotation angle and the magnification can also be estimated from the per-block displacements.
c) Method using phase-only correlation: methods that obtain the displacement, the rotation angle, and the magnification with high accuracy include the phase-only correlation method (POC) and the rotation-invariant phase-only correlation method (RIPOC, Rotation Invariant Phase Only Correlation). The phase-only correlation method applies a discrete Fourier transform to each image to obtain phase images, and detects the position at which the correlation between the two phase images is highest, yielding the displacement. The rotation-invariant variant converts the phase images to log-polar coordinates, so that the rotation angle and the magnification can also be detected as displacements in the converted phase images.
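A minimal sketch of the displacement-only case (subpixel peak refinement and the log-polar RIPOC step are omitted; NumPy assumed):

```python
import numpy as np

def phase_only_correlation(f, g):
    """Estimate the integer displacement between two grayscale images by POC.

    The normalized cross-power spectrum keeps only phase information; its
    inverse DFT has a sharp peak whose location gives the relative
    displacement between f and g (up to the sign convention chosen).
    """
    F, G = np.fft.fft2(f), np.fft.fft2(g)
    cross = F * np.conj(G)
    r = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    py, px = np.unravel_index(np.argmax(r), r.shape)
    # Wrap peaks in the upper half of each axis around to negative shifts.
    dy = py if py <= f.shape[0] // 2 else py - f.shape[0]
    dx = px if px <= f.shape[1] // 2 else px - f.shape[1]
    return dy, dx
```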

  When the geometric transformation parameters between the document image data and the reference image data, and between the document image data and the user image data, have been obtained, the reference image data and the user image data are thereby aligned with each other as well.

  The same holds for the other pairings: the document image data and the user image data are aligned via the parameters between the document image data and the reference image data and between the reference image data and the user image data, or via the parameters between the document image data and the user image data and between the user image data and the reference image data. In short, once two geometric transformation parameters are known, any two of the three sets of image data are aligned.

  When the geometric transformation parameters have been obtained as described above, the geometric transformation parameter estimation unit 42 applies the geometric transformation to the reference image data (or the user image data). When the pixels before and after the transformation do not correspond one-to-one because of a subpixel shift, a rotation, or scaling by a non-integer factor, the pixel values may be derived with a suitable pixel interpolation method, such as the bilinear or bicubic method.

  Note that the geometric transformation itself is not essential. When the pixels at the same position in the document image data and the reference image data (or the user image data) are fetched in the next step, the coordinates may instead be converted with the geometric transformation parameters to judge whether two pixels are at the same position. In other words, even if the coordinate systems based on each image's origin assign different coordinate values to a pixel, pixels that receive the same coordinate value as a result of the geometric transformation are regarded as "pixels at the same position".

  Margins may exist around the printed matter obtained by outputting the document image data. In such a case the height and width of the margins are included in the displacement of the geometric transformation, so the margins are simply not referenced; alternatively, the necessary region may be cut out of the output image data so as to exclude the margins, making the origins of the images coincide.

  Returning to FIG. 7, the difference detection unit 61 evaluates the user printed matter (S150) and determines whether its quality is adequate (S160).

  If the quality of the user printed material is sufficient (Yes in S160), the process ends. If not (No in S160), the process proceeds to the next step S170.

Methods for evaluating the quality of a user printed matter include using the color difference from the reference printed matter, using the hue difference, and using the absolute value of the difference of each color component. The quality may also be evaluated visually.
a) Evaluation method using color difference: the color difference is the distance between two colors in the L*a*b* color space or the L*u*v* color space. Since this embodiment uses a printer as the image output device, the explanation uses the L*a*b* color space.
The color difference ΔE*ab in the L*a*b* color space is defined by the following equation:

ΔE*ab = √((ΔL*)² + (Δa*)² + (Δb*)²)

Here, (ΔL*, Δa*, Δb*) are the differences between the two colors in the L*a*b* color space.
An example of the procedure for obtaining the color difference between the reference print and the user print is shown below.
(1) Read the reference printed matter with the scanner 300 to obtain reference image data.
(2) Read the user printed matter with the same scanner 300 as in (1) to obtain user image data.
(3) Convert the reference image data and the user image data to a device-independent color space (the XYZ color space or the like) using the color profile of the scanner 300.
(4) Convert the reference image data and the user image data to the L*a*b* color space.
(5) Compute the color difference for each pixel with the above equation.
Here the reference printed matter and the user printed matter are read by the same scanner 300; however, the two printed matters may be read by separate scanners 300, provided that their color profiles allow conversion to a device-independent color space.
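Steps (4) and (5) can be sketched as follows, assuming the images have already been converted to L*a*b* arrays (the profile-based conversion of step (3) depends on the particular scanner and is not shown):

```python
import numpy as np

def delta_e_map(lab_ref, lab_user):
    """Per-pixel CIE76 color difference between two L*a*b* images (H x W x 3)."""
    d = lab_ref.astype(np.float64) - lab_user.astype(np.float64)
    return np.sqrt(np.sum(d ** 2, axis=-1))

# de = delta_e_map(lab_ref, lab_user)
# de.mean(), de.max(), de.var()  # statistics used for the quality judgment
```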

  When only one scanner 300 is used, conversion to a device-independent color space using its color profile is not essential. When the color difference values are evaluated quantitatively, the absolute values matter, so conversion to a device-independent color space is necessary; when they are evaluated qualitatively, it is enough to grasp the relative tendency, so the conversion may be omitted.

  Once the color difference for each pixel has been found, this information can be analyzed statistically to evaluate the quality of the user printed matter quantitatively. Examples of analysis values include the average, the maximum, the distribution, and the variance of the color differences.

Whether the quality is sufficient can be judged by criteria such as:
- whether the average color difference is within a predetermined value,
- whether the maximum color difference is within a predetermined value,
- whether the variance is within a predetermined value.
When evaluating the quality of the user printed matter, it is desirable to exclude the contour portions of the image content. This is because:
- it is difficult to align the contours perfectly in the registration required for the subsequent processing, and
- the reproducibility of contours (color, sharpness, and so on) varies from printer to printer,
so a large color difference may appear at the contour portions.

  Since the contour portions occupy only a small part of the total printed area, their influence on a visual evaluation of the overall color tone is limited. In a quantitative evaluation, however, the large color differences at the contours may act as outliers and reduce the reliability of the evaluation result, so it is desirable to exclude them.

  Methods for detecting the contour portions include binarization and edge detection. In a binarization-based method, the image data is binarized into black and white with a predetermined threshold, and the places where a white region and a black region are adjacent are taken as contours. In an edge-detection-based method, an edge image is created from the image data using, for example, the Sobel method, the edge image is binarized with a predetermined threshold, and the pixels above the threshold are taken as contours, as sketched below.
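A sketch of the edge-detection variant, assuming SciPy's Sobel filter and an illustrative threshold (both are assumptions, not values from the patent):

```python
import numpy as np
from scipy import ndimage

def contour_mask(gray, threshold=64.0):
    """Binary mask of contour pixels obtained by Sobel edge detection.

    `gray` is a 2-D grayscale image; pixels whose gradient magnitude
    exceeds `threshold` are treated as contour portions.
    """
    g = gray.astype(np.float64)
    gx = ndimage.sobel(g, axis=1)
    gy = ndimage.sobel(g, axis=0)
    return np.hypot(gx, gy) > threshold
```

The color difference statistics above can then be computed only over the pixels where this mask is False, excluding the contour portions.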

  There is also a way to alleviate the above problem without removing the contours: for example, the image data may be smoothed so that the contours are softened and the color differences appearing there are reduced. For the smoothing, a conventional technique such as an averaging filter or a low-pass filter may be used.

b) Evaluation method using hue difference
The hue difference ΔH*ab in the L*a*b* color space is defined by the following equation:

ΔH*ab = √((ΔE*ab)² − (ΔL*)² − (ΔC*ab)²)

Here, ΔE*ab is the color difference, (ΔL*, Δa*, Δb*) are the differences between the two colors, and ΔC*ab is the chroma difference. The chroma C*ab is defined by the following equation:

C*ab = √((a*)² + (b*)²)
The procedure for obtaining the hue difference between the reference printed matter and the user printed matter is the same as that for the color difference, except that the hue difference is computed instead of the color difference. The statistical analysis and quality judgment methods are also the same.

c) Evaluation method using the absolute value of the difference of each color component: in this method, the absolute value of the difference of each color component between the reference printed matter and the user printed matter is taken in a predetermined color space and evaluated. Taking the RGB color space as an example, the absolute values of the differences of the R component values, of the G component values, and of the B component values are used.
An example of a procedure for obtaining the absolute value of the difference between the color components of the reference print and the user print is shown below.

(1) Read the reference printed matter with the scanner 300 to obtain reference image data.
(2) Read the user printed matter with the same scanner 300 as in (1) to obtain user image data.
(3) Convert the reference image data and the user image data to a device-independent color space (the XYZ color space or the like) using the color profile of the scanner 300.
(4) In the converted color space, obtain the absolute value of the difference of each color component value for each pixel.
As in the case of the color difference, conversion to a device-independent color space using the color profile of the scanner 300 is not essential; the absolute differences may be obtained directly in the device-dependent color space of the scanner 300. The statistical analysis and quality judgment methods are the same as for the color difference.

  Next, the difference detection unit 61 generates the difference image data (S170). That is, the differences between the pixel values of the user image data and those of the reference image data are obtained over the entire image, producing the difference image data. If the reference image data and the user image data have already been aligned, the difference is taken between the pixels at the same coordinates; if only the geometric transformation parameters have been obtained, the difference is taken between the pixels whose coordinates correspond under the geometric transformation. The differences may be obtained pixel by pixel, or the difference of the average pixel values of regions composed of several pixels may be used instead. The differences in pixel value are recorded together with their sign.

  Next, the correction processing unit 62 corrects the difference image data (S180). That is, corrected image data is obtained by applying correction processing with a predetermined correction parameter to the difference image data. Simple ways of preparing the correction parameter in advance include letting the user specify an arbitrary value and determining the parameter beforehand using an image with a sufficient number of gradations. The latter is essentially the same as determining the correction parameter dynamically and is described in the second embodiment. An image with a sufficient number of gradations is used in order to increase the accuracy of the estimated correction parameter; a color chart generally has a sufficient number of gradations, so it fits this purpose and may be used.

In this embodiment, the correction parameter is assumed to be given in advance in the form of γ correction or a lookup table, so γ correction or lookup table conversion is used directly as the correction processing. That is, the correction parameter is either the gamma value for γ correction (the γ in output = input^γ, applied to the pixel values of the difference image data) or, for an LUT, the table itself (which associates each pixel value of the difference image data with the corresponding value of the corrected difference image data).
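A sketch of the γ-correction form applied to signed difference values; the normalization to [0, 1] and the sign handling are illustrative assumptions, since the patent text does not specify how negative differences are treated:

```python
import numpy as np

def gamma_correct_diff(diff, gamma):
    """Apply output = input ** gamma to signed difference values.

    The magnitude is normalized to [0, 1] before exponentiation and
    the sign is restored afterwards.
    """
    mag = np.abs(diff).astype(np.float64) / 255.0
    return np.sign(diff) * (mag ** gamma) * 255.0
```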

  Considering that the color tone reproduction characteristics are not linear, preparing an LUT or a γ value that associates the pixel values of the user image data with the pixel values of the difference image data makes it possible to cope even with nonlinear color tone reproduction characteristics.

  Next, the image synthesis unit 63 synthesizes the document image data and the corrected image data (S190). That is, the document image data and the corrected image data obtained in the previous step are combined to obtain the corrected document image data. Since the corrected image data is signed, it may simply be added to the document image data, or the addition may be weighted. An example of weighted addition is described in the third embodiment.

  When the number of color tone conversions reaches a predetermined number (Yes in S200), the color tone conversion system 610 ends the processing of FIG. 7.

  If the number of color tone conversions has not yet reached the predetermined number (No in S200), the corrected document image data is printed by the user printer 200 as the new input (S110) and the processing continues; the document image data used in the next loop is thus the corrected document image data.

  The flowchart of FIG. 7 contains two end-condition judgments, but both need not be used; either may be omitted as appropriate, although it is desirable that at least one be set.

In this embodiment, the color space produced by the scanner at reading time is used as it is. However, since this is a device-dependent color space, it is desirable to convert it to a device-independent color space using the scanner's color profile. Examples of device-independent color spaces include a device-independent RGB color space and the XYZ color space. It is even better to convert to a uniform color space such as the L*a*b* color space.

When the processing is performed after converting the output image data to the L*a*b* color space, the document image data must also be converted to the L*a*b* color space, and the color tone conversion is then performed in the L*a*b* color space; after the conversion, the data must be returned to the original color space.

  In this embodiment, printers are used as the first and second image output devices and a scanner is used as the image reading unit; however, an offset printing machine, a gravure printing machine, or the like may be used instead of a printer, and a spectrocolorimeter or a camera may be used instead of the scanner.

Another combination of image output device and image reading unit gives the following system configuration example.
FIG. 9 shows an example of a configuration diagram of the color tone conversion system 610. This color tone conversion system 610 includes a computer 100, a projector 800, and a digital camera 900 connected via a network. In this case, the description of the present embodiment may be read with the following substitutions:
a) reference printer → reference display
b) reference printed matter → reference display screen
c) user printer → user projector
d) user printed matter → user display screen
In this embodiment the L*a*b* color space was used because a printer serves as the image output device; when a display or a projector is adopted as the image output device, the L*u*v* color space is used as the uniform color space. In the following embodiments as well, the configuration of the color tone conversion system 610 may be any of FIG. 2, FIG. 5, or FIG. 9.

  As described above, the color tone conversion system 610 of the present embodiment generates difference image data between the user image data and the reference image data, corrects it, and synthesizes it with the document image data; it can therefore correct the color misalignment between two printed materials without using a color chart.

  In this embodiment, a color tone conversion system 610 that dynamically creates correction parameters for correcting difference image data will be described.

  FIG. 10 is an example of a diagram illustrating the characteristic part of the color tone conversion system 610 of the present embodiment. In FIG. 10, the description of the parts that are the same as in FIG. 1 is omitted.

In this embodiment, the LUT or γ correction table is not prepared in advance; instead, the correction parameter is obtained dynamically. For example, 1/P(a) in FIG. 10 is the correction parameter. That is, when the pixel value of a pixel of the difference image data is c and the pixel value of the corresponding pixel of the user image data is a, the correction is performed as follows. Since a larger correction parameter produces a larger correction amount, the correction parameter can be regarded as the correction amount.
- Pixel value after correction: c′ = c × 1/P(a)
Here P(x), with x a pixel value, is a functional expression representing the nonlinear color tone conversion characteristic (the conversion characteristic that turns document image data into user image data); examples include a quadratic curve, an exponential function, and an inverse function. A method for obtaining P(x) is described later. By multiplying by the reciprocal of P(x), the corrected pixel value c′ is brought back to a value before the color tone conversion (a document image data value).
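A sketch of this dynamic correction, where P is the estimated color tone conversion characteristic (for example, a polynomial fitted to the color component value association data described later); the epsilon guard and all names are illustrative:

```python
import numpy as np

def dynamic_correction(diff, user, P):
    """Corrected difference c' = c * 1 / P(a).

    `P` is a callable returning the tone conversion characteristic at
    each user image pixel value a (e.g. a fitted numpy.poly1d); a small
    floor avoids division by zero.
    """
    gain = np.maximum(P(user.astype(np.float64)), 1e-6)
    return diff.astype(np.float64) / gain
```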

  If it is assumed that the tone has been converted to the corrected pixel value c ′, it can be expected to be equal to the difference between the reference image data and the user image data.

  Therefore, the color tone conversion system 610 combines the corrected image data, obtained by performing the above correction for each pixel or pixel group of the difference image data, with the document image data. The user image data obtained when the user printer 200 outputs the corrected document image data can be expected to be equivalent to the reference image data.

  FIG. 11 is an example of a functional block diagram of the color tone conversion system 610 or the MFP 700 according to this embodiment. In FIG. 11, the description of the parts that are the same as in FIG. 6 is omitted. The color tone conversion system 610 or MFP 700 of this embodiment further includes a pixel value association unit 43, a color component value association unit 44, a color tone reproduction characteristic estimation unit 45, and a correction parameter determination unit 64.

  The pixel value association unit 43 uses the geometric transformation parameters to find the pixels of the reference image data at the positions corresponding to the pixels of the document image data, and creates pixel value association data by associating their pixel values. Similarly, it finds the pixels of the user image data at the positions corresponding to the pixels of the document image data and associates their pixel values.

  The color component value association unit 44 obtains from the pixel value association data the correspondence between the values of each color component of the document image data and those of the reference image data, and between the values of each color component of the document image data and those of the user image data, and creates color component value association data by associating these color component values.

  The color tone reproduction characteristic estimation unit 45 estimates the color tone reproduction characteristic data using the color component value association data.

  The correction parameter determination unit 64 determines a correction parameter using the color tone reproduction characteristic data.

  FIG. 12 is an example of a flowchart illustrating the procedure by which the color tone conversion system 610 or the MFP 700 performs color tone correction. In FIG. 12, the description of the steps that are the same as in FIG. 7 is omitted.

  The image reading unit 41 reads the reference printed material and generates reference image data (S110).

  The user printer 200 prints the original image data and outputs a user print (S120).

  The image reading unit 41 reads the user printed material and generates user image data (S130).

  Next, the geometric transformation parameter estimation unit 42 aligns the reference image data and the user image data with the document image data (S140).

  Next, the color component value association unit 44 evaluates the user printed matter (S150). It is then determined whether the user printed matter is of adequate quality (S160).

  Next, through several processes, the color tone reproduction characteristic estimation unit 45 estimates the color tone reproduction characteristic of the user printer 200 (S160).

The estimation of the color tone reproduction characteristic will be described in detail with reference to the flowchart of FIG.
FIG. 13 is an example of a flowchart illustrating a procedure in which the color tone reproduction characteristic estimation unit 45 estimates the color tone reproduction characteristic of the user printer 200.

  First, the pixel value association unit 43 associates the pixel values at the same positions in the document image data and the user image data (S1621). That is, once the alignment between the document image data and the user image data is complete, the pixel values of the corresponding pixels in the two sets of image data are acquired and associated with each other to create the pixel value association data. When the alignment is performed by geometrically transforming the image data, "corresponding pixels" means "pixels at the same position". When the image data is not geometrically transformed, positions that have the same coordinate value after coordinate conversion are regarded as "the same position", and the pixels there as "corresponding pixels".

  Methods of recording the associated pixel values include a list format and a matrix format. The description assumes that both the document image data and the user image data are RGB images with 256 gradations per color component.

a) Method of recording in list format: the color component values are recorded in lists according to the following procedure (a sketch follows the list).

a-1) Prepare three lists (one per color component).
a-2) Select a coordinate in the document image data.
a-3) Associate the R component value of the pixel of the document image data selected in a-2) with the R component value of the corresponding pixel of the user image data, and add the pair to the R component list.
a-4) Similarly, add the G component values to the G component list and the B component values to the B component list.
a-5) Repeat this for all coordinates of the document image data. The lists may be sorted in ascending or descending order as necessary.
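A sketch of this list-format recording (NumPy assumed; vectorized over all coordinates, which is equivalent to looping through steps a-2) to a-5)):

```python
import numpy as np

def pixel_value_lists(doc, user):
    """Record corresponding color component values in list format.

    `doc` and `user` are aligned H x W x 3 RGB arrays; returns one
    sorted list of (document value, user value) pairs per component.
    """
    lists = {}
    for i, c in enumerate("RGB"):
        pairs = zip(doc[..., i].ravel(), user[..., i].ravel())
        lists[c] = sorted(pairs)   # ascending order, as in step a-5)
    return lists
```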

b) Method of recording in matrix format: votes are cast into matrices of color component value correspondences according to the following procedure (a sketch follows the list). Here the document image data values are taken on the vertical axis (rows) and the user image data values on the horizontal axis (columns).

b-1) Prepare three matrices of 256 rows and 256 columns.
b-2) Select a coordinate in the document image data.
b-3) In the R component matrix, cast one vote at the intersection of the row given by the R component value of the pixel of the document image data selected in b-2) and the column given by the R component value of the corresponding pixel of the user image data.
b-4) Similarly, vote the G component correspondence into the G component matrix and the B component correspondence into the B component matrix.
b-5) Repeat this for all coordinates of the document image data.

Concretely, if the pixel at some coordinate of the document image data has the values (128, 130, 132) in RGB order and the corresponding pixel of the user image data has (132, 130, 126), one vote is cast at row 128, column 132 of the R component matrix, one at row 130, column 130 of the G component matrix, and one at row 132, column 126 of the B component matrix. Which of the document image data values and the user image data values is assigned to the rows and which to the columns may be decided as needed.
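A sketch of the matrix-format voting, using one 256 × 256 counting matrix per component (np.add.at is used because it accumulates correctly when the same index pair appears repeatedly):

```python
import numpy as np

def vote_matrices(doc, user):
    """Matrix-format recording of color component correspondences.

    Rows index document image data values and columns index user image
    data values; entry [d, u] counts how often document value d
    corresponds to user value u.
    """
    mats = np.zeros((3, 256, 256), dtype=np.int64)
    for i in range(3):
        np.add.at(mats[i], (doc[..., i].ravel(), user[..., i].ravel()), 1)
    return mats
```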

  Whether the list format or the matrix format is used, the processing need not iterate over all coordinates of the document image data; to simplify the processing, the coordinates may be limited to a specific range or stepped at a predetermined interval.

  Next, the color component value association unit 44 associates the corresponding color component values of the document image data and the user image data (S1622). That is, using the pixel value association data, each color component value of the document image data is associated with the corresponding color component value of the user image data, creating the color component value association data.

A case where both the original image data and the user image data are RGB images and each color component has 256 gradations will be described as an example.
a) When the pixel value association data is in a list format When the pixel value association data is recorded as a list, the following procedure is used.

a-1) Select a value with a color component in the original image data
a-2) Get a list corresponding to the color component selected in a-1)
a-3) Get all records corresponding to the value selected in a-1) from the list obtained in a-2)
a-4) Synthesize color component values of user image data of all records obtained in a-3)
a-5) The color component value of the document image data selected in a-1) is associated with the value synthesized in a-4) and recorded as color component value association data.

a-6) Repeat this for each value of each color component
If only one record is acquired in a-3), the color component value association unit 44 uses its value as it is. If a plurality of records are acquired in a-3), the values on the user image data side are combined into one value. Examples of methods for combining a plurality of values include adopting the average value, adopting the mode, and adopting the median. A small sketch of this combination follows.
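As a hedged illustration of the list-format combination, the sketch below assumes each list entry is an (original value, user value) pair; the helper name combine_records is hypothetical.

```python
import statistics

def combine_records(pairs, original_value, method="mean"):
    """Combine the user-side values of all records whose original-side
    value matches the selected value, as in steps a-3) to a-5)."""
    values = [u for (o, u) in pairs if o == original_value]
    if not values:
        return None  # the value is unused in the original image data
    if method == "mean":
        return statistics.mean(values)
    if method == "mode":
        return statistics.mode(values)
    return statistics.median(values)
```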
b) When the pixel value association data is in a matrix format
When the pixel value association data is recorded as matrices, the following procedure is used.

b-1) Select a value with a certain color component in the original image data
b-2) Get the matrix corresponding to the color component selected in b-1)
b-3) Extract the row corresponding to the value selected in b-1) from the matrix obtained in b-2)
b-4) Combine the values of the columns voted in the row extracted in b-3)
b-5) Associate the color component value of the selected original image data with the value combined in b-4) and record it as color component value association data
b-6) Repeat this for each value of each color component
If votes exist in only one column of the row extracted in b-3), that column number is adopted as the combined value. If votes exist in multiple columns, they are combined into one value. The methods for combining a plurality of values are the same as in a), except that the number of votes is used as the number of appearances of each column number. A sketch of this estimation from the vote matrices follows.
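The following sketch assumes one 256x256 vote matrix as built earlier and combines the voted columns of each row by a vote-weighted average (the average option named above; mode and median are analogous). Values never used in the original image data are marked so they can be identified in the next step.

```python
import numpy as np

def associate_from_matrix(votes: np.ndarray) -> np.ndarray:
    """votes: a 256x256 matrix. Returns mapping[v] = combined user value
    for each original value v, with -1 marking unused original values."""
    mapping = np.full(256, -1.0)
    cols = np.arange(256)
    for v in range(256):                 # b-1) each original-side value
        row = votes[v]                   # b-3) extract the corresponding row
        total = row.sum()
        if total == 0:
            continue                     # value unused in the original image
        # b-4) vote-weighted average of the voted column numbers
        mapping[v] = (row * cols).sum() / total
    return mapping
```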

If there is a color component value that is not used in the original image data, it is desirable to record that fact so that it can be identified later (the information recorded here can be used in the next step).
Next, the color tone reproduction characteristic estimation unit 45 estimates the color tone conversion characteristic (S1623). The color tone conversion characteristic is estimated using the data series of the color component value association data. The color component value association data may be used as it is, or it may be processed before use. The purpose of processing the data is to suppress extreme value fluctuations and to improve the stability of the characteristic curve.

Examples of methods for processing and using the color component value association data include the following.
a) Moving average
A weighted average is taken of the data of interest in the data series and the data before and after it. The reference range of the preceding and following data may be determined according to how smooth the data series values need to be; to make the series smoother, a wider reference range should be taken. The weight used for the weighted average may be constant for all data, or may be inversely proportional to the distance from the data of interest.

Before applying the moving average, the data series must be rearranged in ascending or descending order. In addition, if there are color component values that are not used in the original image data, elements of the data series will be missing after rearrangement; such missing elements must be excluded from the weighted average so that they do not affect other data. Whether an element is missing can be determined by checking whether the data is continuous, or by using the information on unused color component values recorded in the previous step. A sketch of such a moving average follows.
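A sketch with a constant weight (the inverse-distance weighting mentioned above would replace the plain mean), assuming the mapping array from the previous sketch with -1 marking missing elements:

```python
import numpy as np

def smooth_mapping(mapping: np.ndarray, radius: int = 2) -> np.ndarray:
    """Constant-weight moving average over a window of 2*radius + 1 values,
    excluding missing (-1) elements so they do not affect other data."""
    smoothed = mapping.copy()
    n = len(mapping)
    for i in range(n):
        if mapping[i] < 0:
            continue                          # leave missing entries as-is
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        window = mapping[lo:hi]
        valid = window[window >= 0]           # exclude missing elements
        smoothed[i] = valid.mean()
    return smoothed
```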
b) Approximation by a line or curve
The data series is approximated using a linear function, quadratic function, spline function, exponential function, or the like.
c) Reduction of the number of gradations
The number of gradations of the data series of the color component value association data is reduced, and the series is then interpolated or approximated by a line or curve. The following examples of gradation reduction can be considered.

A) Dividing the gradation range at equal intervals and combining the data of the gradations integrated into each division. The number of divisions and the division width may be determined in advance or determined dynamically.

A-1) Case where the number of divisions and the division width are determined in advance
FIG. 14A is an example of a diagram illustrating the case where the number of divisions and the division width are determined in advance. It shows an example in which 256 gradations from 0 to 255 are divided at equal intervals by a division number of 4 given in advance. By dividing into the four regions 0 to 63, 64 to 127, 128 to 191, and 192 to 255, every 64 gradations are reduced to one gradation (integrated into one conversion characteristic). The same effect can be obtained if the division width is given instead of the division number.

A-2) Case where the number of divisions and the division width are determined dynamically
As an example of dynamically determining the number of divisions and the division width when dividing at equal intervals, there is a method in which the number of divisions is proportional to the number of pixels. For example, a value obtained by dividing the number of pixels by a predetermined, empirically determined number may be used as the division number.

B) Dividing the gradation range at unequal intervals and combining the data of the gradations integrated thereby. Using the number of votes in the pixel value association data corresponding to each gradation, the division width is adaptively determined so that the number of votes in each division becomes a predetermined number.

  FIG. 14B is an example in which 256 gradations from 0 to 255 are divided into four at unequal intervals: the four regions 0 to (a-1), a to (b-1), b to (c-1), and c to 255, each reduced to one gradation. Into which of the adjoining regions the boundary gradations a, b, and c are integrated may be decided as necessary.

  Methods of determining the gradations to be integrated when dividing at unequal intervals include a method using the cumulative frequency of the number of pixels belonging to each gradation, and a method using a histogram of the frequency of the number of pixels belonging to each gradation.

i) Method using the cumulative frequency of the number of pixels belonging to each gradation
This method divides the cumulative frequency of the number of pixels belonging to each gradation at equal intervals, and divides the gradation range at the gradations corresponding to the separation positions.

  FIG. 14C shows an example in which 256 gradations from 0 to 255 are divided into four at unequal intervals. With the maximum cumulative frequency on the vertical axis taken as 1.0, the gradations at which the cumulative frequency reaches 0.25, 0.50, and 0.75 are obtained as the separation positions. In this example, gradation a gives a cumulative frequency of 0.25, gradation b 0.50, and gradation c 0.75, which determine the four regions described above. With this division method, each section contains the same number of data, so the number of pixels to which each conversion characteristic applies can be equalized. A sketch of this division follows.
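A sketch of method i), assuming hist holds the per-gradation pixel counts of the original image data; splitting the normalized cumulative frequency at 0.25, 0.50, and 0.75 yields the boundary gradations a, b, and c of FIG. 14C.

```python
import numpy as np

def split_by_cumulative_frequency(hist: np.ndarray, divisions: int = 4):
    """Return the gradations at which the cumulative frequency first
    reaches 1/divisions, 2/divisions, ..."""
    cum = np.cumsum(hist) / hist.sum()             # cumulative frequency, max 1.0
    targets = np.arange(1, divisions) / divisions  # 0.25, 0.50, 0.75 for four parts
    return [int(np.searchsorted(cum, t)) for t in targets]
```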

ii) Method using the frequency distribution of the number of pixels belonging to each gradation
FIG. 15A illustrates a method of creating a histogram of the frequency of the number of pixels belonging to each gradation and dividing the range at the gradations e, f, and g where the histogram takes minima. With this division method, the number of pixels whose conversion characteristics switch can be reduced.

  After the number of gradations is reduced as described above, interpolation is performed between the integrated gradations so that the original number of gradations is restored, or approximation with a line or curve based on the integrated data is performed.

  FIG. 15B shows an example in which the reduced number of gradations is returned to the original number of gradations by linear approximation or curve approximation. The circles represent the integrated data, and the lines show examples of linear approximation and curve approximation. It is desirable to select the function used for approximation according to the tendency of the integrated data. By interpolating in this way, or approximating with a line or curve, singular conversions caused by a small number of pixels can be excluded from the estimated conversion characteristics.

  Returning to FIG. 12, the correction parameter determination unit 64 obtains correction parameters from the color tone reproduction characteristic of the user printer 200 (S164). A correction parameter can be expressed as a coefficient for reflecting a value of the difference image data in the original image data, or as material from which such a coefficient is derived.

  The difference image data could be reflected in the original image data as it is, but it is better to apply a correction process first, for the following reason. The difference image data is the difference between the reference image data and the user image data, and both of these are the result of superimposing the color tone reproduction characteristics of the respective image output devices on the original image data. Therefore, even if a difference between pixel values produced on the original image data by the correction is numerically the same as the difference between the two values on the reference image data and the user image data, their meanings differ.

  For this reason, in this embodiment, a correction process using the correction parameters is applied to the difference image data to convert it into its equivalent meaning on the original image data.

  FIG. 16 is an example of a diagram illustrating the relationship between the user image data and the original image data. P(x) in the figure is the color tone reproduction characteristic obtained in S162; that is, the pixel value a of the original image data is associated with P(a) of the user image data. Since the color tone reproduction characteristic is generally not linear, a difference Δa in the vicinity of a certain pixel value P(a) on the user image data and a difference Δb in the vicinity of another pixel value P(b) have different meanings on the original image data. In the example of the figure, the difference Δa is emphasized more than the difference Δb by the color tone reproduction characteristic. For this reason, the correction processing performed on the difference image data needs to depend on the pixel values of the reference image data or the user image data from which it is derived.

Where x is the pixel value of a pixel in the original image data and y is the pixel value of the corresponding pixel in the user image data, examples of methods for obtaining the correction parameter α(x) to be applied include the following.
a) Method using the reciprocal of the slope
In this method, the slope P′(x) of the color tone reproduction characteristic function y = P(x) is obtained for all gradations, and its reciprocal is used. FIG. 16B is an example of a diagram schematically explaining the slope. The slope of the color tone reproduction characteristic function represents how a minute variation in the vicinity of a certain pixel value of the original image data is reflected in the user image data. In the correction process, the difference image data is to be reflected in the original image data, so it is necessary to know how large a variation on the original image data a minute variation on the user image data corresponds to; this is found by taking the reciprocal of the slope. Therefore, the correction parameter α(x) in this method can be expressed by the following equation:

α(x) = 1 / P′(x)

b) Method using the ratio with the ideal straight line
If every color of the original image data were reproduced exactly in the user image data, the color tone reproduction characteristic function could be expressed as the straight line y = x. FIG. 16C is an example of a diagram schematically illustrating the straight line y = x and P(x). By calculating the ratio between the actual color tone reproduction characteristic function P(x) and this straight line, it is possible to know how much a certain pixel value of the original image data has been scaled in the user image data. Assuming that the difference image data has been scaled in the same way, the value to be reflected in the original image data is obtained by multiplying by the reciprocal of this ratio. Therefore, the correction parameter α(x) in this method can be expressed by the following equation:

α(x) = x / P(x)

The correction parameters obtained above may be further adjusted using a known technique such as gamma correction.
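The two methods might be sketched as follows, assuming P is the estimated color tone reproduction characteristic sampled at all 256 gradations (P[x] approximates P(x)); the guards against division by zero are an assumption of this sketch, not prescribed by the text.

```python
import numpy as np

def alpha_reciprocal_slope(P: np.ndarray) -> np.ndarray:
    """a) alpha(x) = 1 / P'(x), with a finite-difference slope."""
    slope = np.gradient(P.astype(float))
    return 1.0 / np.where(np.abs(slope) < 1e-6, 1e-6, slope)

def alpha_ideal_line_ratio(P: np.ndarray) -> np.ndarray:
    """b) alpha(x) = x / P(x), the reciprocal of the ratio P(x) / x."""
    x = np.arange(len(P), dtype=float)
    return x / np.where(P > 1e-6, P, 1e-6)
```

Either lookup table can then be consulted per pixel in the correction step S180 below.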

  Next, the difference detection unit 61 generates difference image data (S170). The method for obtaining the difference image data is the same as in the first embodiment.

  Next, the correction processing unit 62 corrects the difference image data using the correction parameters obtained in step S164 (S180). For example, when a pixel of the user image data has the pixel value a and the value of the difference image data corresponding to that pixel is c, the value c′ of the corresponding pixel of the corrected difference image data can be expressed by the following equation:

c′ = α(a) · c

This process is performed over the entire difference image data to create corrected image data.

Next, the image composition unit 63 composes the document image data and the corrected image data (S190).
When the number of color tone conversions reaches a predetermined number (Yes in S200), the color tone conversion system 610 ends the process of FIG. 12.

  If the number of color tone conversions has not reached the predetermined number (No in S200), the corrected original image data is printed by the user printer 200 and used as the input, and the process continues (S110). All the original image data used in the next loop is the corrected data.

  In the above description, the correction parameter is derived based on the color tone reproduction characteristic of the user printer 200. However, the correction parameter may be derived using the color tone reproduction characteristic of the reference printer 400. An example of the method will be described.

The procedure from S110 to S162 is the same as in FIG. 12. After the color tone reproduction characteristic P2(x) of the user printer 200 is estimated in S162, the color tone reproduction characteristic P1(x) of the reference printer 400 is estimated by the same procedure.

Subsequently, in estimating the correction parameters, the difference Pd(x) between the color tone reproduction characteristic P1(x) of the reference printer 400 and the color tone reproduction characteristic P2(x) of the user printer 200 is obtained, for example, by the following equation:

Pd(x) = P1(x) − P2(x)

Next, after the difference detection unit 61 generates the difference image data in S170, the correction processing unit 62 corrects the difference image data (S180). At this time, the correction processing applies the inverse function of the difference Pd(x) in color tone reproduction characteristics to the difference image data, as in the following equation, to generate the corrected image data:

c′ = Pd⁻¹(c)

In S190, the image composition unit 63 composes the original image data and the corrected difference image data, thereby creating original image data from which user image data having the same tone as that of the reference printer 400 can be obtained.

  In other words, this method regards the difference between the reference image data and the user image data as being caused by the difference in the color tone reproduction characteristics of the respective image output devices, and obtains the difference on the original image data by applying the inverse conversion of the difference in color tone reproduction characteristics to the value detected as the difference between the image data.

  According to the present embodiment, in addition to the effects of the first embodiment, the difference image data can be corrected with high accuracy by dynamically generating the correction parameters for the difference image data from the color tone reproduction characteristics.

  In the present embodiment, a color tone conversion system 610 that applies weighting when combining the original image data and the corrected image data of Embodiment 1 will be described. This makes it possible to suppress fluctuations in the pixel values of the original image data caused by the color tone conversion.

  FIG. 17 is an example of a functional block diagram of the color tone conversion system 610 or the MFP 700 of this embodiment. In FIG. 17, the description of the same parts as those in FIG. 6 is omitted. The color tone conversion system 610 or the MFP 700 according to the present exemplary embodiment includes a pixel association unit 51, a color difference acquisition unit 52, a color space distance acquisition unit 53, and a synthesis weight determination unit 54.

  The pixel association unit 51 detects corresponding pixels of the reference image data and the user image data using the geometric transformation parameter, and associates them to create pixel association data.

  The color difference acquisition unit 52 obtains the color difference between corresponding pixels of the reference image data and the user image data using the pixel association data, and creates color difference data.

  The color space distance acquisition unit 53 obtains the distance of each color of the original image data from the achromatic color and creates distance data. The distance of each color of the reference image data or the user image data from the achromatic color may be obtained instead, because the distance data is considered to show the same tendency in the reference image data and the user image data.

  The combination weight determination unit 54 determines a combination weight used for combining image data from the color difference data and the distance data.

  FIG. 18 is an example of a flowchart illustrating a procedure in which the color tone conversion system 610 or the MFP 700 performs color tone correction. In FIG. 18, the description of the process of the same step as that of FIG. 7 is simplified.

  The image reading unit 41 reads the reference printed material and generates reference image data (S110).

  The user printer 200 prints the original image data and outputs a user print (S120).

  The image reading unit 41 reads the user printed material and generates user image data (S130).

  Next, the geometric transformation parameter estimation unit 42 aligns the reference image data and the user image data with the document image data (S140). That is, using the original image data as a reference, geometric conversion parameters for the reference image data and user image data are obtained, and alignment is performed using the geometric conversion parameters.

  In this embodiment, once the geometric conversion parameters have been estimated, the pixel association unit 51 uses them to detect the pixels of the reference image data and the user image data at the positions corresponding to each pixel of the original image data, and associates them with each other to create pixel association data.

  If the geometric conversion is applied to the reference image data and the user image data, they are aligned substantially to the same positions as the original image data, so pixels having the same coordinates in the reference image data and the user image data can be considered to correspond to each other. In this way, the pixel association unit 51 obtains the two geometric transformation parameters and associates the pixels of the reference image data with those of the user image data.

  In step S140, the geometric conversion parameters between the original image data and the reference image data and between the original image data and the user image data are estimated, but a geometric conversion parameter between the reference image data and the user image data may be obtained instead. That is, by obtaining a geometric transformation parameter that matches the image positions of the reference image data and the user image data, and applying the transformation to either one, the image positions of the reference image data and the user image data can be substantially matched.

  Note that actually applying the geometric conversion is not essential. To identify pixels at the same position in the original image data and the reference image data (or the user image data), it is sufficient to perform coordinate conversion with the geometric conversion parameters and judge whether the positions coincide. In other words, even if pixels hold different coordinate values in the coordinate systems based on the origins of the respective images, pixels whose coordinate values coincide after geometric transformation are regarded as "pixels at the same position".

  Margins may exist around the printed material obtained by outputting the original image data. In such a case, the height and width of the margin part are included in the displacement amount of the geometric transformation, so the margin part is not referred to. Alternatively, a necessary area may be cut out of the output image data so as to exclude the margin, and the origins of the respective images may be made to correspond.

  Next, the color difference acquisition unit 52 evaluates the user printed material (S150), and it is determined whether or not the user printed material is valid (S160).

  If valid (Yes in S160), the color difference acquisition unit 52 acquires color difference data (S165). The color difference acquisition unit 52 obtains the color difference between the reference image data and the user image data for each pixel and uses it as the color difference data. The color difference data is therefore managed as a map having the same size as the image data, and the coordinates of the reference image data and the user image data are associated with the coordinates of the color difference map.

The method for deriving the color difference is as described for the quality evaluation of the user printed material in the first embodiment. If the color difference was already used in the evaluation of the user printed material, this step may be omitted by reusing the per-pixel color differences obtained there.
The color difference data may be processed as follows.
a) Smooth the map with a moving average
b) Lower the resolution of the map (substitute the combined color difference of multiple pixels)
c) Remove from the map the areas corresponding to contour portions in the reference image data or the user image data. If the color difference is calculated for every pixel including the contour portions, noise may be added to the color difference data, so removing these areas is desirable. When removing the contour portions from the map, it is desirable to interpolate the removed parts appropriately so that the map does not become discontinuous. Conventional techniques may be used for the interpolation; for example, smoothing may be used in addition to linear interpolation and cubic interpolation.

Next, the color space distance acquisition unit 53 obtains distance data (S166). That is, the distance from the achromatic color of each color used in the document image data is obtained and used as distance data. The distance data may be managed as a map having the same size as the image data, similarly to the color difference data, or may be managed as a list in which colors and distances are associated with each other. An example of the procedure for obtaining distance data is shown below.
(1) Convert the color space of the original image data to the L*a*b* color space
(2) Select one color from the original image data and obtain its distance d*ab from the achromatic color
(3) Repeat (2) for all colors of the original image data
The following equation is an example of the definition of the distance d*ab:

d*ab = √(a*² + b*²)

In the L*a*b* color space, the point where the a* value and the b* value are zero is an achromatic color, and the Euclidean distance from this achromatic point is used as the distance d*ab. A sketch of this computation follows.
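The sketch assumes the original image data is an 8-bit sRGB array and uses scikit-image's rgb2lab for the color space conversion; the patent does not prescribe a particular converter.

```python
import numpy as np
from skimage.color import rgb2lab

def achromatic_distance_map(original_rgb: np.ndarray) -> np.ndarray:
    """Return a map of d*ab = sqrt(a*^2 + b*^2) for each pixel."""
    lab = rgb2lab(original_rgb / 255.0)   # (1) convert to the L*a*b* space
    a, b = lab[..., 1], lab[..., 2]
    return np.sqrt(a ** 2 + b ** 2)       # (2)-(3) distance from achromatic
```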

Next, the combination weight determination unit 54 determines combination weight data (S167) using the color difference data and the distance data. An example of the procedure for obtaining the combination weight data is shown below, on the assumption that the color difference data and the distance data are managed as maps.
(1) Divide the color difference data by the color difference threshold thΔE, then saturate at a maximum of 1 (regard all weights of 1 or more as 1) to create color difference weight data
(2) Divide the distance data by the distance threshold thd, then saturate at a maximum of 1 (regard all weights of 1 or more as 1) to create distance weight data
(3) Multiply the value at each coordinate of the color difference weight data by the value at the corresponding coordinate of the distance weight data to calculate the weight
(4) Repeat (3) for all coordinates
The color difference threshold thΔE and the distance threshold thd may be values set in advance by the user, or values obtained by some statistical processing, such as the average or median of the color difference data and the distance data. In particular, the distance threshold thd may be obtained by the following equation, using the maximum values a*max and b*max that the a* component value and the b* component value can take in the L*a*b* color space:

thd = √(a*max² + b*max²)

The reason for dividing the color difference data and the distance data by the thresholds is to make the weight data change continuously from zero up to the threshold. By changing the weight data continuously, the gradation can be expected not to become discontinuous after the original image data and the corrected image data are combined. The reason the maximum value is saturated at 1 is to keep the range of the weight data between 0 and 1. A sketch of this weight creation follows.
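The sketch below covers steps (1) to (4) above, assuming color_diff and distance are maps of the same shape and the two thresholds are given; dividing by the threshold and clipping at 1 gives weights that rise continuously from 0 and saturate at 1.

```python
import numpy as np

def combination_weights(color_diff: np.ndarray, distance: np.ndarray,
                        th_de: float, th_d: float) -> np.ndarray:
    """Per-pixel combination weight WT = Wc * Wd."""
    wc = np.clip(color_diff / th_de, 0.0, 1.0)  # (1) color difference weights
    wd = np.clip(distance / th_d, 0.0, 1.0)     # (2) distance weights
    return wc * wd                              # (3)-(4) for all coordinates
```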

FIG. 19A shows an example of the distance weight data when the maximum value of the distance is used as the threshold. For example, when the distance threshold thd is obtained by the above equation, the distance weight data wd changes continuously as the distance d*ab goes from 0 to the maximum value d*max, reaching 1 at d*max.

FIG. 19B shows an example of the distance weight data when a value smaller than d*max is adopted as the distance threshold thd. As shown in the figure, the distance weight data wd changes continuously while the distance d*ab goes from 0 to thd, and is saturated at 1 thereafter.

Further, the color difference weight data and the distance weight data may be varied not only linearly as illustrated in FIGS. 19A and 19B but also nonlinearly using a predetermined function. FIGS. 19C and 19D show examples of distance weight data or color difference weight data obtained by nonlinear functions; examples of such functions include quadratic functions, exponential functions, and inverse functions. Moreover, not only functions but also predetermined information such as a lookup table may be used to convert color differences and distances into weight data.

FIG. 20A shows an example of a color difference distribution, and FIG. 20B shows an example of color difference weight data. Here, an (n being an integer of 0 or more) indicates a position in the x or y direction of the map, and the color difference weight data is described as the weight applied to the corrected image data at the time of combination. Suppose that FIG. 20A is part of certain color difference data: the value of the color difference changes according to the position, and the color difference is zero from x = a4 to x = a5.

On the other hand, the color difference weight data in FIG. 20B is saturated at 1 in the range where the value of the color difference data is larger than the color difference threshold thΔE (x = 0 to x = ath), and the weight is zero from x = a4 to x = a5, where the color difference is zero. In the remaining ranges, the weight varies as the color difference data increases or decreases.

  In FIG. 20B, the variation of the color difference data and the color difference weight data is expressed as linear, but the variation may be nonlinear and there may be discontinuous portions. Although shown in one dimension for convenience of illustration, the color difference weight data also varies in accordance with the color difference data in a two-dimensional distribution. Furthermore, although the description here assumes color difference data, distance in the color space behaves in much the same way, and the color difference may be read as the distance.

The combination weight determination unit 54 obtains the combination weight WT, for example, as follows, where the color difference weight of a certain pixel or pixel group is Wc (0 ≦ Wc ≦ 1) and the distance weight is Wd (0 ≦ Wd ≦ 1).
WT = Wc · Wd
In addition, when only the color difference weight or only the distance weight is used, it is as follows.
WT = Wc
WT = Wd
If the distance weight is not obtained for each pixel or pixel group, the distance weight associated with the color of the pixel or pixel group of interest in the original image data is used. The combination weight data, like the color difference data, is managed as a map having the same size as the image data.

  Next, the difference detection unit 61 generates difference image data (S170).

  The correction processing unit 62 corrects the difference image data (S180). The correction method may be either the method of the first embodiment or the second embodiment.

Next, the image data synthesis unit combines the original image data and the corrected image data (S190). The procedure for generating the corrected original image data by combining the original image data and the corrected image data is shown below.
(1) Acquire the pixel value at a certain coordinate from the original image data
(2) Acquire the pixel value at the corresponding coordinate from the corrected image data
(3) Acquire the weight at the corresponding coordinate from the weight data
(4) Weight and average the two pixel values using the weight to determine the combined pixel value
(5) Update the corresponding pixel of the original image data using the combined pixel value
(6) Repeat (1) to (5) for all coordinates
The combined pixel value GT is obtained, for example, as follows, where the color difference weight of the pixel or pixel group is Wc (0 ≦ Wc ≦ 1), the distance weight is Wd (0 ≦ Wd ≦ 1), the color of the original image data is O, and the color of the corrected image data is M.
GT = WT · M + (1 − WT) · O
In addition, when only the color difference weight or only the distance weight is used, it is as follows.
GT = Wc · M + (1 − Wc) · O
GT = Wd · M + (1 − Wd) · O
In the above procedure, the original image data is updated to generate the corrected original image data, but the corrected original image data may instead be created as new image data. A sketch of this combination follows.
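The sketch covers steps (1) to (6), assuming the weight map from the earlier sketch and 8-bit arrays for the original image data O and the corrected image data M; working in floating point and clipping back to the 8-bit range is this sketch's own choice.

```python
import numpy as np

def synthesize(original: np.ndarray, corrected: np.ndarray,
               weights: np.ndarray) -> np.ndarray:
    """GT = WT * M + (1 - WT) * O, applied over all coordinates."""
    O = original.astype(float)
    M = corrected.astype(float)
    W = weights[..., np.newaxis]        # broadcast the weight map over RGB
    G = W * M + (1.0 - W) * O
    return np.clip(G, 0, 255).astype(np.uint8)
```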

The following effects can be expected by using the above-mentioned weights when combining the original image data and the corrected image data.
a) Portions where the reference image data and the user image data originally have the same color can be maintained. If correction processing is applied to the difference image data uniformly, the color of portions that originally matched between the reference image data and the user image data also changes; even if the overall degree of color coincidence improves, the degree of coincidence falls at those portions.

  On the other hand, by determining the color difference weight based on the degree of color coincidence between the reference image data and the user image data and reflecting it in the combination weight of the original image data and the corrected image data, this decrease in the degree of coincidence can be avoided. This is due to the following reason.

The color difference weight is controlled so that the specific gravity of the original image data is high where the degree of coincidence is high, and conversely the specific gravity of the corrected image data is high where the degree of coincidence is low. As a result, portions of the image data with a high degree of coincidence are left unchanged, and the correction is reflected only in portions with a low degree of coincidence.

b) The problem that achromatic portions of the original image data are improperly colored is alleviated. If correction processing is applied to the difference image data uniformly, achromatic portions of the original image data become colored, and the degree of coincidence at those portions decreases. On the other hand, by determining the distance weight based on the distance of each color constituting the original image data from the achromatic color and reflecting it in the combination weight of the original image data and the corrected image data, this decrease in the degree of coincidence can be avoided. This is due to the following reason.

The distance weight is controlled so that the specific gravity of the original image data increases as the distance from the achromatic color decreases, and the specific gravity of the corrected image data increases as the distance from the achromatic color increases. As a result, portions of the image data near the achromatic color change relatively little, and portions far from the achromatic color change relatively much. When the number of color tone conversions reaches the predetermined number (Yes in S200), the color tone conversion system 610 or the MFP 700 ends the process of FIG. 18.

  According to the present embodiment, in addition to the effects of the first embodiment, unnecessary color tone conversion can be prevented from occurring in the corrected original image data by weighting the corrected image data.

DESCRIPTION OF SYMBOLS 41 Image reading part 42 Geometric transformation parameter estimation part 43 Pixel value matching part 44 Color component value matching part 45 Color tone reproduction characteristic estimation part 51 Pixel matching part 52 Color difference acquisition part 53 Color space distance acquisition part 54 Synthesis weight determination part 61 Difference Detection Unit 62 Correction Processing Unit 63 Image Composition Unit 64 Correction Parameter Determination Unit 100 Computer 200 User Printer 300 Scanner 400 Reference Printer 500 Network 601 Image Input Unit 602 Image Output Unit 603 Image Storage Unit 604 Image Analysis Unit 605 Parameter Storage Unit 606 Image processing unit 610 color tone conversion system 700 MFP
800 Projector 900 Digital camera

JP 2009-177790 A

Claims (8)

  1. An image processing apparatus for reproducing a color tone of a first output result obtained by outputting original image data by a first image output means in a second output result obtained by outputting the original image data by a second image output means. ,
    The reading device estimates the first geometric transformation parameter that aligns the position of the first output image data and the original image data read from the first output result, and the reading device reads the second output result. Geometric transformation parameter estimation means for estimating a second geometric transformation parameter for aligning the second output image data and the original image data;
    Difference image data is generated by obtaining a difference between a pixel value of a pixel or a pixel group of the first output image data and the second output image data, which is associated with the first and second geometric transformation parameters. Difference detection means;
    Correction processing means for creating corrected image data in which the pixel values of the difference image data are converted with reference to a conversion table in which the pixel values of the difference image data are associated with the converted pixel values;
    Image combining means for combining the document image data and the corrected image data by arithmetically processing pixel values of corresponding pixels of the document image data and the corrected image data;
    An image processing apparatus comprising:
  2. Using the second geometric transformation parameter, the pixel of the second output image data at a position corresponding to the pixel of the document image data is detected, and pixel value association data in which the pixel values are associated with each other A pixel value association means to create;
    A color for creating color component value association data by associating the pixel values of the pixels of the original image data associated with each other by the pixel value association data with the pixel values of the pixels of the second output image data Component value association means;
    Color tone reproduction characteristic generating means for generating color tone reproduction characteristic data for estimating the other from one of the pixel values of the second output image data and the original image data by the color component value association data;
    The correction processing means applies a pixel value of the second output image data corresponding to a pixel position of the difference image data to the tone reproduction characteristic data, calculates a correction amount of the difference image data, The image processing apparatus according to claim 1, wherein the corrected image data is generated by correcting a pixel value of the difference image data using a correction amount.
  3. Change amount calculating means for calculating a second change amount of the document image data from a first change amount of the second output image data using the color tone reproduction characteristic data;
    The change amount calculating means estimates the second change amount with respect to the first change amount of the pixel value of the pixel of the second output image data corresponding to the pixel value of the pixel of the difference image data,
    The image processing according to claim 2, wherein the correction processing unit generates the corrected image data by correcting the pixel value of the pixel of the difference image data by the estimated second change amount. apparatus.
  4. Using the second geometric transformation parameter, the second output image data pixel at a position corresponding to the pixel of the document image data is detected, and the corresponding pixel value is associated with the second pixel value correspondence Pixels for detecting attachment data and pixels of the first output image data at positions corresponding to the pixels of the document image data, and creating first pixel value association data in which those pixel values are associated with each other Value association means;
    Second color component value association data associating pixel values of the pixels of the document image data associated with each other by the second pixel value association data with pixel values of the pixels of the second output image data And a first color component value for associating the pixel value of the pixel of the original image data and the pixel value of the pixel of the first output image data associated with each other by the first pixel value association data Color component value association means for creating association data;
    Second color reproduction characteristic data for estimating one of the pixel values of the second output image data and the original image data from the second color component value association data, and the first color component Color tone reproduction characteristic estimating means for generating first color reproduction characteristic data for estimating the other from one of the pixel values of the first output image data and the original image data by value association data;
    The correction processing means applies the pixel value of the difference image data to third color reproduction characteristic data obtained by subtracting the second color reproduction characteristic data from the first color reproduction characteristic data. Correcting the pixel value of the difference image data to generate the corrected image data;
    The image processing apparatus according to claim 1.
  5. The color component value association unit generates the color component value association data after converting the document image data and the second output image data into a device-independent color space.
    The image processing apparatus according to claim 2.
  6. Pixel associating means for obtaining a correspondence relationship between the first output image data and the second output image data according to the first and second geometric transformation parameters;
    Color difference information acquisition means for acquiring color difference information for each pixel or pixel block of the first output image data and the second output image data for each pixel or pixel block associated by the correspondence relationship, Distance information acquisition means for acquiring distance information from the achromatic color of the document image data, the first output image data, or the second output image data for each pixel block or color;
    Weight determining means for determining a weight value based on at least one of the color difference information and the distance information for each pixel or pixel block of the corrected image data;
    Image data combining means for combining the document image data and the corrected image data by arithmetically processing the pixel values of the corresponding pixels of the corrected image data weighted by the weight values. An image processing apparatus according to claim 1, comprising:
  7. The image processing apparatus according to claim 1, wherein the degree of coincidence between the first output image data and the second output image data is quantified, and whether color tone conversion is necessary is determined based on the degree of coincidence.
  8. Second image output means for outputting the original image data output as the first output result by the first image output means as the second output result, and the first output result and the second output result are read. An image processing system having a reading device and an information processing device that performs image processing,
    The reading device estimates the first geometric transformation parameter that aligns the position of the first output image data and the original image data read from the first output result, and the reading device reads the second output result. Geometric transformation parameter estimation means for estimating a second geometric transformation parameter for aligning the second output image data and the original image data;
    Difference image data is generated by obtaining a difference between a pixel value of a pixel or a pixel group of the first output image data and the second output image data, which is associated with the first and second geometric transformation parameters. Difference detection means;
    Correction processing means for creating corrected image data in which the pixel values of the difference image data are converted with reference to a conversion table in which the pixel values of the difference image data are associated with the converted pixel values;
    Image combining means for combining the document image data and the corrected image data by arithmetically processing pixel values of corresponding pixels of the document image data and the corrected image data;
    An image processing system comprising:
JP2011157181A 2011-07-15 2011-07-15 Image processing apparatus and image processing system Active JP5760785B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2011157181A JP5760785B2 (en) 2011-07-15 2011-07-15 Image processing apparatus and image processing system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2011157181A JP5760785B2 (en) 2011-07-15 2011-07-15 Image processing apparatus and image processing system

Publications (2)

Publication Number Publication Date
JP2013026693A JP2013026693A (en) 2013-02-04
JP5760785B2 true JP5760785B2 (en) 2015-08-12

Family

ID=47784606

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2011157181A Active JP5760785B2 (en) 2011-07-15 2011-07-15 Image processing apparatus and image processing system

Country Status (1)

Country Link
JP (1) JP5760785B2 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6237117B2 (en) * 2013-10-28 2017-11-29 セイコーエプソン株式会社 Image processing apparatus, robot system, image processing method, and image processing program
JP6384268B2 (en) * 2014-10-28 2018-09-05 株式会社リコー Image processing apparatus, image processing method, and image processing system
JP6342824B2 (en) * 2015-01-26 2018-06-13 富士フイルム株式会社 Color conversion table creation device, color conversion table creation method, and color conversion table creation program
JP6398783B2 (en) * 2015-02-26 2018-10-03 ブラザー工業株式会社 Color unevenness detection method, head adjustment method using this color unevenness detection method, and color unevenness inspection apparatus
JP6339962B2 (en) * 2015-03-31 2018-06-06 富士フイルム株式会社 Image processing apparatus and method, and program
JP6428454B2 (en) * 2015-04-08 2018-11-28 株式会社リコー Image processing apparatus, image processing method, and image processing system

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07505511A (en) * 1992-04-02 1995-06-15
JP4313523B2 (en) * 2001-08-16 2009-08-12 富士フイルム株式会社 Image data output device and image data output program
JP2006094040A (en) * 2004-09-22 2006-04-06 Dainippon Screen Mfg Co Ltd Profile generating device, program, and profile generating method

Also Published As

Publication number Publication date
JP2013026693A (en) 2013-02-04

Similar Documents

Publication Publication Date Title
CN105706434B (en) Color-conversion table producing device and method
US6671067B1 (en) Scanner and printer profiling system
JP4234281B2 (en) Printing system
US9088673B2 (en) Image registration
KR100300950B1 (en) Method and apparatus for correcting color
US9088745B2 (en) Apparatus, system, and method of inspecting image, and recording medium storing image inspection control program
US7783122B2 (en) Banding and streak detection using customer documents
EP0967791B1 (en) Image processing method, apparatus and memory medium therefor
US7340092B2 (en) Image processing device, image processing method, program for executing image processing, and computer readable recording medium on which the program is stored
US7952757B2 (en) Production of color conversion profile for printing
US6441923B1 (en) Dynamic creation of color test patterns based on variable print settings for improved color calibration
US7227990B2 (en) Color image processing device and color image processing method
US5642202A (en) Scan image target locator system for calibrating a printing system
DE60031910T2 (en) Accurate color image reproduction of colors within the hue area and improved color image reproduction of colors outside the hue area
US7330600B2 (en) Image processing device estimating black character color and ground color according to character-area pixels classified into two classes
JP5053273B2 (en) Method for printing an image on a receiving medium
AU758295B2 (en) Automatic margin alignment
JP4491129B2 (en) Color gamut mapping method and apparatus using local area information
EP1156668B1 (en) Black generation for color management system
US8175155B2 (en) Image processing apparatus, image processing method, image processing program, and storage medium
US6522425B2 (en) Method of predicting and processing image fine structures
JP4890974B2 (en) Image processing apparatus and image processing method
JP4194289B2 (en) Image processing method
US6381037B1 (en) Dynamic creation of color test patterns for improved color calibration
US8325396B2 (en) Color management apparatus, color management method and computer readable medium recording color management program

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20140610

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20150303

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20150408

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20150512

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20150525

R151 Written notification of patent or utility model registration

Ref document number: 5760785

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R151