WO2019193900A1 - Printed matter inspection device, printed matter inspection method, and printed matter inspection program - Google Patents


Info

Publication number
WO2019193900A1
WO2019193900A1 (application PCT/JP2019/008636)
Authority
WO
WIPO (PCT)
Prior art keywords
image
printed matter
neural network
matter inspection
output image
Application number
PCT/JP2019/008636
Other languages
French (fr)
Japanese (ja)
Inventor
一谷 修司
Original Assignee
Konica Minolta, Inc. (コニカミノルタ株式会社)
Application filed by Konica Minolta, Inc. (コニカミノルタ株式会社)
Publication of WO2019193900A1 (published in English)

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B41 PRINTING; LINING MACHINES; TYPEWRITERS; STAMPS
    • B41J TYPEWRITERS; SELECTIVE PRINTING MECHANISMS, i.e. MECHANISMS PRINTING OTHERWISE THAN FROM A FORME; CORRECTION OF TYPOGRAPHICAL ERRORS
    • B41J29/00 Details of, or accessories for, typewriters or selective printing mechanisms not otherwise provided for
    • B41J29/38 Drives, motors, controls or automatic cut-off devices for the entire printing mechanism
    • B41J29/393 Devices for controlling or analysing the entire machine; controlling or analysing mechanical parameters involving printing of test patterns
    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03G ELECTROGRAPHY; ELECTROPHOTOGRAPHY; MAGNETOGRAPHY
    • G03G21/00 Arrangements not provided for by groups G03G13/00-G03G19/00, e.g. cleaning, elimination of residual charge
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis

Definitions

  • The present invention relates to a printed matter inspection apparatus, a printed matter inspection method, and a printed matter inspection program.
  • Conventionally, waste paper (defectively printed sheets) is detected in one of two ways. One method detects waste paper by extracting the difference between a scanned image of a reference print and a scanned image of the printed matter to be inspected. The other detects waste paper by extracting the difference between reference RIP data and a scanned image of the printed matter to be inspected.
  • As a technique related to the former, Patent Document 1 below describes the following. Pattern matching is performed between a multi-tone line image, obtained by imaging the surface to be inspected of a sheet-like printed matter with a line sensor, and a reference master multi-tone line image obtained with a line sensor in advance, and the density levels of the two are compared. The portion of the inspected surface corresponding to a region where the density-level difference exceeds an allowable value is determined to be a defect.
  • The technique related to the latter is described in Patent Document 2 below. A master image generated from a print job and the inspection target image formed on paper by that print job are aligned and collated to extract their difference; when the difference exceeds a predetermined threshold, the inspection target image is determined to be defective and the sheet is reprinted.
  • The alignment is performed in two stages. In the first stage, each image is divided into a plurality of blocks, and the positions of the two images are corrected so that the degree of coincidence of the markers superimposed in a plurality of regions around both images is highest. In the next stage, blocks whose images contain abundant edge components are selected as suitable for alignment, and the positions of the two images are corrected so that the similarity within the selected blocks is highest.
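The second-stage block selection can be sketched as follows. This is a toy illustration of the prior-art idea only; the block size and the gradient-sum edge measure are assumptions, not details taken from Patent Document 2:

```python
import numpy as np

def best_block_for_alignment(img, block=4):
    """Return the top-left corner of the block with the most edge energy
    (sum of absolute horizontal and vertical pixel gradients)."""
    h, w = img.shape
    best, best_score = None, -1.0
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            b = img[y:y + block, x:x + block].astype(float)
            score = np.abs(np.diff(b, axis=0)).sum() + np.abs(np.diff(b, axis=1)).sum()
            if score > best_score:
                best, best_score = (y, x), score
    return best

img = np.zeros((8, 8), dtype=np.uint8)
img[0:4, 0:4] = (np.indices((4, 4)).sum(axis=0) % 2) * 255  # edge-rich checkerboard
print(best_block_for_alignment(img))  # → (0, 0): the block richest in edges
```

A block with many edges constrains alignment in both axes, which is why edge-poor (flat) blocks are skipped.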
  • The former method has the problem that the amount of work increases, because it requires creating and reading a reference print.
  • In the latter method, the changes introduced into the image by forming it on paper and reading it back are present in the scanned image of the printed matter to be inspected but not in the reference RIP data. These changes therefore have a relatively large effect on the difference between the reference RIP data and the scanned image, and the detection accuracy for waste paper is degraded.
  • Accordingly, an object of the present invention is to provide a printed matter inspection apparatus, a printed matter inspection method, and a printed matter inspection program capable of improving the detection accuracy for waste paper without increasing the amount of work.
  • To achieve this object, a printed matter inspection apparatus comprises: a simulation unit that simulates, from an input image of RIP data, the changed image as an estimated output image; a comparison unit that compares the estimated output image with an output image obtained by reading, with a reading device, the image formed on a recording medium by the image forming apparatus based on the input image; and a specifying unit that identifies an abnormal recording medium based on the comparison result of the comparison unit.
  • The simulation unit is a neural network of a learned model trained in advance so as to output, for a learning input image, an estimated output image that reproduces the change in the image applied to the corresponding learning output image.
  • The change in the image is a change in image quality in at least one of gradation, color reproducibility, sharpness, noise, density, and light-distribution unevenness, and the neural network is trained using a combination of a learning input image of a chart for reproducing any one of gradation, color reproducibility, sharpness, noise, density, and light-distribution unevenness, and a learning output image in which the change has been applied to the learning input image of the chart.
  • The neural network is a neural network of a learned model trained in advance using a combination of a learning input image of a character chart and a learning output image obtained by applying the change to the learning input image of the character chart; the printed matter inspection apparatus according to (3) above.
  • The simulation unit includes neural networks of learned models that respectively reproduce the change for each usage time of the image forming apparatus and the reading device, including a learned model for detecting deterioration in a specific image quality, and estimated output images are simulated from a common input image by each of these learned-model neural networks. The apparatus further includes a determination unit that determines, among the plurality of estimated output images obtained by the simulation unit, the learned-model neural network whose estimated output image has the smallest difference from the output image obtained by reading, with the reading device, the image formed on the recording medium by the image forming device based on the common input image. The simulation unit then simulates the estimated output image from the input image using the neural network of the learned model determined by the determination unit.
  • A printed matter inspection method.
  • The neural network is a neural network of a learned model trained in advance using teacher data of combinations of the input image and the output image obtained by reading, with the reading apparatus, the image formed on the recording medium by the image forming apparatus based on the input image; the printed matter inspection method according to (6) above.
  • The change is a change in image quality in at least one of gradation, color reproducibility, sharpness, noise, density, and light-distribution unevenness, and the neural network is a neural network of a learned model trained in advance using teacher data of combinations of the input image of a chart for reproducing gradation, color reproducibility, sharpness, noise, density, or light-distribution unevenness and the output image corresponding to that input image of the chart; the printed matter inspection method according to (7) above.
  • The neural network is a neural network of a learned model trained in advance using teacher data of combinations of the input image of a character chart and the output image corresponding to that input image of the character chart.
  • Neural networks of learned models, including a learned model for detecting deterioration in a specific image quality, respectively reproduce the change for each usage time of at least one of the image forming apparatus and the reading apparatus, and estimated output images are simulated from a common input image by each of these learned-model neural networks. The method further includes a step (d) of determining, among the plurality of estimated output images obtained by the simulation in step (a), the estimated output image having the smallest difference from the output image corresponding to the common input image. Step (b) then compares the output image, obtained by reading with the reading device the image formed on a recording medium by the image forming apparatus based on the common input image, with the estimated output image determined in step (d); the printed matter inspection method according to (6) above.
  • A printed matter inspection program causes a computer to execute the printed matter inspection method according to any one of (6) to (10) above.
  • The changed image is simulated from the input image of RIP data by a neural network of a learned model trained in advance to reproduce the change in the image caused by forming the image on a recording medium and reading the formed image. The image obtained by the simulation is then compared with the image obtained by reading the image formed on the printed matter to be inspected, and waste paper is identified based on the comparison result. The detection accuracy for waste paper can thereby be improved without increasing the amount of work.
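The simulate-compare-identify flow just summarized can be sketched as follows. The neural-network simulation unit is stood in for by a fixed gray cast (a hypothetical "specific change"), and the threshold is likewise an assumed value:

```python
import numpy as np

def simulate_estimated_output(rip_image):
    # Stand-in for the learned simulation unit: model the change caused by
    # printing and scanning as a uniform gray cast (hypothetical value of 20).
    return np.clip(rip_image.astype(int) + 20, 0, 255).astype(np.uint8)

def is_waste_paper(rip_image, scanned_image, threshold=30):
    # Compare the scanned output against the simulated estimate; any pixel
    # differing by more than the threshold marks the sheet as waste paper.
    estimated = simulate_estimated_output(rip_image)
    diff = np.abs(estimated.astype(int) - scanned_image.astype(int))
    return bool((diff > threshold).any())

rip = np.full((4, 4), 100, dtype=np.uint8)
good_scan = np.full((4, 4), 120, dtype=np.uint8)   # matches the simulated cast
bad_scan = good_scan.copy()
bad_scan[1, 2] = 200                               # a local defect

print(is_waste_paper(rip, good_scan))  # → False
print(is_waste_paper(rip, bad_scan))   # → True
```

Because the estimate already contains the printing-and-scanning change, the expected gray cast does not register as a difference; only genuine defects do.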
  • FIG. 1 is a schematic diagram illustrating the configuration of an image forming apparatus including the printed matter inspection apparatus according to the first embodiment.
  • FIG. 2 is a block diagram illustrating the configuration of the image forming apparatus. FIG. 3 is a block diagram showing the functions of the control unit during learning of the printed matter inspection apparatus. FIG. 4 is a block diagram showing the functions of the control unit during learning, with concrete examples of the learning input image, the learning output image, and the estimated output image. FIG. 5 is a block diagram showing the functions of the control unit during learning, with other concrete examples of the learning input image, the learning output image, and the estimated output image.
  • FIG. 1 is a schematic diagram illustrating a configuration of an image forming apparatus including a printed matter inspection apparatus according to the first embodiment.
  • FIG. 2 is a block diagram illustrating a configuration of the image forming apparatus.
  • the image forming apparatus 100 includes a control unit 110, a storage unit 120, a communication unit 130, an operation display unit 140, an image reading unit 150, an image control unit 160, and an image forming unit 170. These components are communicably connected to each other via a bus 180.
  • the image forming apparatus 100 can be configured by an MFP (Multi Function Peripheral).
  • The control unit 110 constitutes the printed matter inspection apparatus.
  • the control unit 110 includes a CPU (Central Processing Unit) and various memories, and controls the above-described units and performs various arithmetic processes according to a program. Details of the function of the control unit 110 will be described later.
  • The storage unit 120 is configured by an SSD (Solid State Drive), an HDD (Hard Disk Drive), or the like, and stores various programs and various data.
  • the communication unit 130 is an interface for performing communication between the image forming apparatus 100 and an external device.
  • For connecting to a network, a network interface based on a standard such as Ethernet (registered trademark), SATA, or IEEE 1394 is used.
  • For direct local connection to an external device, a wireless communication interface such as Bluetooth (registered trademark) or IEEE 802.11 is used.
  • the operation display unit 140 includes a touch panel, a numeric keypad, a start button, a stop button, and the like, and is used for displaying various information and inputting various instructions.
  • the image reading unit 150 constitutes a reading device, and includes a light source such as a fluorescent lamp and an image sensor such as a CCD (Charge Coupled Device) image sensor.
  • the image reading unit 150 applies light from a light source to a document set at a predetermined reading position, photoelectrically converts the reflected light with an image sensor, and generates image data from the electric signal.
  • the image control unit 160 performs layout processing and rasterization processing of print data included in the print job received by the communication unit 130, and generates bitmap format image data.
  • the print job is a general term for a print command for the image forming apparatus 100 and includes print data and print settings.
  • the print data is data of a document to be printed, and the print data may include various data such as image data, vector data, and text data.
  • the print data may be PDL (Page Description Language) data, PDF (Portable Document Format) data, or TIFF (Tagged Image File Format) data.
  • the print settings are settings related to image formation on paper, and include various settings such as the number of pages, the number of copies to be printed, paper type, color or monochrome, and page allocation.
  • the image forming unit 170 includes an image forming unit 40, a fixing unit 50, a paper feeding unit 60, and a paper conveying unit 70.
  • the image forming unit 40 includes image forming units 41Y, 41M, 41C, and 41K corresponding to toners of respective colors of Y (yellow), M (magenta), C (cyan), and K (black).
  • The toner images formed by the image forming units 41Y, 41M, 41C, and 41K through the charging, exposure, and development processes based on the image data are sequentially superimposed on the intermediate transfer belt 42 and then transferred onto the paper 900 by the secondary transfer roller 43.
  • The fixing unit 50 includes a heating roller 51 and a pressure roller 52; it heats and presses the paper 900 conveyed to the fixing nip formed between the two rollers, melting the toner image and fixing it to the surface of the paper 900.
  • the paper 900 on which the toner image is fixed by the fixing unit 50 is discharged to the paper discharge tray 190 as a printed material (output product).
  • the paper feed unit 60 has a plurality of paper feed trays 61 and 62, and sends out the paper 900 stored in the paper feed trays 61 and 62 one by one to the downstream transport path.
  • the paper transport unit 70 includes a plurality of transport rollers for transporting the paper 900, and transports the paper 900 between the image forming unit 40, the fixing unit 50, and the paper feeding unit 60.
  • the plurality of transport rollers include a registration roller 71 for correcting the inclination of the paper 900 and a loop roller 72 for forming a predetermined amount of loop on the paper 900.
  • The paper transport unit 70 discharges the paper 900 on which an image has been formed to the paper discharge tray 190.
  • Details of the functions of the control unit 110 will now be described.
  • FIG. 3 is a block diagram illustrating the functions of the control unit during learning of the printed matter inspection apparatus. As described above, the control unit 110 constitutes the printed matter inspection apparatus, so the operations during learning are described below as being performed by the printed matter inspection apparatus.
  • the printed matter inspection apparatus includes a first encoder 111, a feature conversion unit 112, a decoder 113, and a second encoder 114. Each of these components can be constituted by a neural network.
  • the first encoder 111, the feature conversion unit 112, and the decoder 113 constitute the simulation unit 10.
  • The second encoder 114 is necessary only while the printed matter inspection apparatus is learning, and is not needed during the printed matter inspection described later. For this reason, the second encoder need not be mounted on the printed matter inspection apparatus after learning.
  • the printed matter inspection apparatus learns to reproduce an image change caused by the formation of an image on the paper 900 by the image forming unit 170 and the reading of the formed image by the image reading unit 150.
  • the change in the image due to the formation of the image on the paper 900 by the image forming unit 170 and the reading of the formed image by the image reading unit 150 will be referred to as “specific change”.
  • After learning, the printed matter inspection apparatus can simulate, as an estimated output image, the image data to which the specific change has been applied.
  • The specific change includes, for example, image changes due to noise in the optical system when the image forming unit 170 forms a latent image on the paper 900, changes in the size of the paper 900 when the toner image is fixed, and image changes due to noise in the optical system when the image reading unit 150 reads the image.
  • the printed matter inspection apparatus learns, as learning data, a combination of the learning input image 500 of RIP data and the learning output image 600 obtained by adding a specific change to the learning input image 500.
  • the specific change includes a change in image quality of at least one of gradation, color reproducibility, sharpness, noise, density, and uneven light distribution.
  • Gradation is, for example, the characteristic of change in color shades, such as the smoothness of tonal transitions.
  • the color reproducibility is a characteristic indicating the degree of reproduction of the original color, for example.
  • Sharpness is, for example, a characteristic of image clarity.
  • Light-distribution unevenness means, for example, that the luminous-intensity distribution over the image plane is not uniform.
  • the first encoder 111 receives a learning input image 500 of RIP data and a learning output image 600 obtained by adding a specific change to the learning input image as learning data.
  • The learning input image 500 is, for example, bitmap-format RIP data whose content is content A.
  • the learning input image 500 is assumed to be image data obtained by rasterizing print data included in a print job by the image control unit 160. Therefore, the learning input image 500 is an image before the image is formed on the paper 900 and the image reading unit 150 reads the formed image, and thus does not include a specific change.
  • the learning output image 600 is formed on the paper 900 by the image forming unit 170 based on the image data obtained by rasterizing the print data included in the print job by the image control unit 160, and the image is read by the image reading unit 150. And the like obtained by being read by. Therefore, the learning output image 600 includes a specific change.
  • the first encoder 111 extracts the feature of the content A from the learning input image 500 and extracts the feature of the specific change included in the learning output image 600 from the learning output image 600.
  • The feature conversion unit 112 performs a conversion that adds the feature of the specific change to the feature of content A, based on the feature of content A of the learning input image 500 and the feature of the specific change included in the learning output image 600. The feature conversion unit 112 thereby calculates the feature of an image in which the specific change has been applied to content A.
  • the decoder 113 reproduces, as the estimated output image 550, an image in which the specific change is added to the content A from the feature in which the specific change feature is added to the feature of the content A.
  • the second encoder 114 extracts, from the estimated output image 550, the feature of the image in which the specific change is added to the content A.
  • The printed matter inspection apparatus calculates a first loss L1 based on the difference between the feature calculated by the feature conversion unit 112, in which the specific-change feature is added to the feature of content A, and the feature of the image with the specific change applied to content A that is extracted by the second encoder 114.
  • the second encoder 114 extracts the feature of the specific change included in the estimated output image 550 from the estimated output image 550 as the first feature.
  • the second encoder 114 extracts the feature of the specific change included in the learning output image 600 from the learning output image 600 as the second feature.
  • the printed matter inspection apparatus calculates a second loss L2 based on the difference between the first feature and the second feature.
  • The printed matter inspection apparatus calculates the sum of the first loss L1 and the second loss L2 as the total loss, and trains the first encoder 111, the feature conversion unit 112, the decoder 113, and the second encoder 114 by the error back-propagation method so that the total loss is minimized.
  • the total loss may be a sum after the first loss L1 and the second loss L2 are appropriately weighted.
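The total-loss computation described above can be sketched numerically as follows. The mean-squared difference and the equal default weights are assumptions; in the apparatus the features come from the encoder networks:

```python
import numpy as np

def total_loss(converted_feat, reencoded_feat,
               first_change_feat, second_change_feat, w1=1.0, w2=1.0):
    # First loss L1: difference between the feature produced by the feature
    # conversion unit and the feature re-extracted from the estimated output.
    l1 = np.mean((converted_feat - reencoded_feat) ** 2)
    # Second loss L2: difference between the specific-change features of the
    # estimated output image and the learning output image.
    l2 = np.mean((first_change_feat - second_change_feat) ** 2)
    # Weighted sum, as in the weighting variant mentioned above.
    return w1 * l1 + w2 * l2

loss = total_loss(np.array([1.0, 2.0]), np.array([1.0, 0.0]),
                  np.array([3.0]), np.array([1.0]))
print(loss)  # → 6.0  (L1 = 2.0, L2 = 4.0)
```

Minimizing L1 ties the decoder's output back to the converted feature, while L2 forces the reproduced specific change to match the one in the learning output image.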
  • FIG. 4 is a block diagram showing functions of the control unit during learning of the printed matter inspection apparatus, specifically showing examples of the learning input image, the learning output image, and the estimated output image.
  • The content of the learning input image 501 is, for example, four circles: one with no interior color and three whose interiors are filled with colors of different densities.
  • the content of the learning output image 601 is the same as the content of the learning input image 501, for example.
  • As the specific change, the learning output image 601 has a gray cast over the entire image, including the content.
  • the estimated output image 551 is an image in which the specific change included in the learning output image 601 is reflected in the content of the learning input image 501.
  • The estimated output image 551 is therefore an image in which a gray cast over the entire image, including the content of the four circles (one with no interior color and three filled with colors of different densities), is reflected as the specific change.
  • FIG. 5 is a block diagram showing functions of the control unit during learning of the printed matter inspection apparatus, specifically showing another example of the learning input image, the learning output image, and the estimated output image.
  • the content of the learning input image 502 is, for example, the same content as the content A including four circles including a circle without a color inside and three circles with a color having a different density inside.
  • the content of the learning output image 602 is, for example, the content of four circles (content B) with black color inside, unlike the content of the learning input image 502 (content A).
  • As the specific change, the learning output image 602 has a gray cast over the entire image, including the content.
  • the estimated output image 552 is an image in which the specific change included in the learning output image 602 is reflected in the content of the learning input image 502.
  • The estimated output image 552 is an image in which a gray cast over the entire image, including the content of the four circles (one with no interior color and three filled with colors of different densities), is reflected as the specific change. That is, the estimated output images in the example of FIG. 4 and the example of FIG. 5 are the same.
  • FIG. 6 is a diagram illustrating an image of image data of a character chart and an image obtained by reading an image formed on a sheet based on the image data.
  • the left figure is an image of image data of a character chart
  • the right figure is an image obtained by reading an image formed on a sheet based on the image data of a character chart.
  • Because the simulation unit 10 learns using learning data that combines a learning input image 500 of a character chart with a learning output image 600 in which the specific change is reflected in the character chart, the accuracy of waste-paper detection for printed matter containing characters can be effectively improved.
  • FIG. 7 is a block diagram showing the functions of the control unit during printed matter inspection. As described above, the control unit 110 constitutes the printed matter inspection apparatus, so the operations during inspection are described below, as in learning, as being performed by the printed matter inspection apparatus. In FIG. 7, the image forming unit 170 and the image reading unit 150 are also shown for ease of explanation.
  • the printed matter inspection apparatus includes a first encoder 111, a feature conversion unit 112, a decoder 113, an alignment unit 115, a comparison unit 116, and a specifying unit 117.
  • the first encoder 111, the feature conversion unit 112, and the decoder 113 are previously learned to reproduce the specific change.
  • the first encoder 111, the feature conversion unit 112, and the decoder 113 constitute the simulation unit 10.
  • the inspection target of the printed matter inspection is an inspection target image 513 formed on the paper 900.
  • Image data in bitmap format, namely RIP data obtained by rasterizing the print data included in the print job that outputs the inspection target image 513 as printed matter, is input to the first encoder 111 as the input image 503.
  • the content of the input image 503 is content C.
  • the first encoder 111 extracts the feature of the content C from the input image 503.
  • the first encoder 111 outputs the feature of the specific change to the feature conversion unit 112 together with the feature of the extracted content C.
  • By learning in advance, the first encoder 111 becomes able to extract and output the feature of content C and to output the feature of the specific change.
  • the feature conversion unit 112 performs conversion to add a feature of specific change to the feature of the content C of the input image 503.
  • the decoder 113 reproduces, as the estimated output image 553, an image in which the specific change is added to the content C from the features obtained by the conversion by the feature conversion unit 112.
  • the input image 503 is formed on the paper 900 by the image forming unit 170 to become an inspection target image (image on the paper) 513.
  • the inspection target image 513 becomes an output image (read image) 523 by being read by the image reading unit 150 for printed matter inspection.
  • The alignment unit 115 aligns the output image 523 and the estimated output image 553 by a known method using markers such as the registration marks (called "tombo", literally "dragonfly", in Japanese) used in bookbinding.
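Marker-based alignment can be sketched in miniature: locate one dark registration mark in each image and take the coordinate difference as the translation. This is a toy stand-in for the known method, which would use several marks and sub-pixel matching:

```python
import numpy as np

def marker_offset(reference, scanned):
    # Find the darkest pixel (the registration mark) in each image and
    # return the (dy, dx) translation between the two images.
    ry, rx = np.unravel_index(np.argmin(reference), reference.shape)
    sy, sx = np.unravel_index(np.argmin(scanned), scanned.shape)
    return sy - ry, sx - rx

ref = np.full((8, 8), 255, dtype=np.uint8)
ref[2, 3] = 0                                  # mark in the reference image
scan = np.full((8, 8), 255, dtype=np.uint8)
scan[4, 5] = 0                                 # same mark, shifted by (2, 2)

print(marker_offset(ref, scan))  # → (2, 2)
```

The recovered offset would then be applied to bring the output image and the estimated output image into pixel correspondence before comparison.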
  • The comparison unit 116 compares the aligned output image 523 with the estimated output image 553, which serves as the reference image. For example, the comparison unit 116 may compare the estimated output image 553 and the output image 523 by calculating, between pixels brought into correspondence by the alignment, the difference in at least one of brightness, hue, and saturation.
  • The specifying unit 117 determines, based on the comparison result of the comparison unit 116, whether the paper (printed matter) 900 on which the inspection target image 513 is formed is waste paper. The specifying unit 117 thereby identifies waste paper.
  • When the specifying unit 117 determines that the difference calculated by the comparison unit 116 exceeds a preset threshold for some pixel, it can determine that the paper 900 from which the output image 523 containing that pixel was read is waste paper.
  • The threshold can be set based on a correlation, obtained in advance by experiment or the like, between the magnitude of the per-pixel difference in brightness or the like and whether the sheet is judged to be waste paper.
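A per-pixel comparison in brightness, hue, and saturation can be sketched with the standard library's colorsys module; the threshold triple below is a hypothetical value standing in for one calibrated by the experiment described above:

```python
import colorsys

def pixel_exceeds(rgb_estimated, rgb_output, thresholds=(0.1, 0.1, 0.1)):
    # Convert both aligned pixels to HSV (hue, saturation, value/brightness)
    # and flag the pair if any channel difference exceeds its threshold.
    hsv_e = colorsys.rgb_to_hsv(*[c / 255 for c in rgb_estimated])
    hsv_o = colorsys.rgb_to_hsv(*[c / 255 for c in rgb_output])
    return any(abs(a - b) > t for a, b, t in zip(hsv_e, hsv_o, thresholds))

print(pixel_exceeds((100, 100, 100), (104, 104, 104)))  # → False (small drift)
print(pixel_exceeds((100, 100, 100), (200, 50, 50)))    # → True (clear defect)
```

Separate thresholds per channel allow, for instance, tolerating brightness drift while remaining strict about hue shifts.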
  • FIG. 8 is a block diagram showing the functions of the simulation unit during printed matter inspection, with a concrete example of the RIP-data input image given to the simulation unit and the resulting estimated output image.
  • The content of the input image 504 is, for example, content C: four circles, one with no interior color and three whose interiors are filled with colors of different densities.
  • The first encoder 111, the feature conversion unit 112, and the decoder 113 constituting the simulation unit 10 have been trained in advance to reproduce, as the specific change, a gray cast over the entire image including the content.
  • The simulation unit 10 therefore simulates and outputs an estimated output image 554 in which the gray cast is reflected as the specific change over the entire image, including the content C of the four circles.
  • The operation of the control unit 110 will now be described.
  • FIG. 9 is a flowchart showing the operation of the control unit 110 during learning of the printed matter inspection apparatus. This flowchart can be executed by the control unit 110 according to a program.
  • the control unit 110 acquires a combination of the learning input image 500 and the learning output image 600 as learning data (S101).
  • the control unit 110 can acquire the learning data by reading the learning data stored in the storage unit 120 in advance.
  • the control unit 110 uses the first encoder 111 to extract the feature of the content of the learning input image 500 and the feature of the specific change included in the learning output image 600 from the learning data (S102).
  • the control unit 110 causes the feature conversion unit 112 to perform conversion for adding the specific change feature to the content feature extracted in step S102 (S103).
  • the control unit 110 reproduces the estimated output image 550 by the decoder 113 from the characteristics obtained by the conversion in step S104 (S104).
  • the control unit 110 calculates, as the first loss L1, the difference between the feature obtained by the conversion in step S103 and the feature extracted from the estimated output image 550 by the second encoder 114 (S105).
  • the control unit 110 extracts the feature of the specific change from the learning output image 600 acquired in step S101 as the first feature (S106).
  • the control unit 110 extracts the feature of the specific change from the estimated output image 550 as the second feature (S107).
  • the control unit 110 calculates the difference between the first feature and the second feature as the second loss L2 (S108).
  • the control unit 110 trains the simulation unit 10 and the second encoder 114 so that the sum of the first loss L1 and the second loss L2 is minimized (S109).
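As a rough illustration, the two-loss training objective of steps S102 to S109 can be sketched in Python as follows. The linear "encoders", "converter", and "decoder", and the mean-intensity change feature, are all hypothetical stand-ins (the patent does not specify the network architecture or the feature definitions); the sketch only shows how the first loss L1 and the second loss L2 are computed and combined.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the learned components (illustrative only):
# linear maps acting on flattened 64-pixel image vectors.
W_enc1 = rng.standard_normal((16, 64)) * 0.1   # first encoder 111 (S102)
W_conv = rng.standard_normal((16, 16)) * 0.1   # feature conversion unit 112 (S103)
W_dec  = rng.standard_normal((64, 16)) * 0.1   # decoder 113 (S104)
W_enc2 = rng.standard_normal((16, 64)) * 0.1   # second encoder 114 (S105)

def first_encoder(x):  return W_enc1 @ x
def convert(f):        return W_conv @ f
def decoder(f):        return W_dec @ f
def second_encoder(x): return W_enc2 @ x

def extract_change_feature(img):
    # Placeholder for the "feature of the specific change" (S106/S107):
    # here simply the mean intensity, standing in for a gray tint.
    return np.array([img.mean()])

learning_input  = rng.random(64)        # learning input image (RIP data)
learning_output = learning_input * 0.9  # teacher image with a gray shift

# S102-S104: encode the input, convert the features, decode the estimate
f = convert(first_encoder(learning_input))
estimated_output = decoder(f)

# S105: first loss L1 -- converted feature vs. feature of the estimate
L1 = np.sum((f - second_encoder(estimated_output)) ** 2)

# S106-S108: second loss L2 -- change feature of teacher vs. estimate
L2 = np.sum((extract_change_feature(learning_output)
             - extract_change_feature(estimated_output)) ** 2)

total_loss = L1 + L2  # S109: minimize L1 + L2 w.r.t. the network weights
```

In an actual implementation the weights would be updated by gradient descent on `total_loss`; the sketch stops at the loss computation.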
  • FIG. 10 is a flowchart showing the operation of the control unit during printed matter inspection of the printed matter inspection apparatus. This flowchart can be executed by the control unit 110 according to a program.
  • the control unit 110 simulates the estimated output image 553 from the input image 503 of the RIP data by the learned simulation unit 10 (S201).
  • based on the input image 503, the control unit 110 forms an inspection target image 513 on the paper 900 with the image forming unit 170, and acquires the output image 523 by reading the formed inspection target image 513 with the image reading unit 150 (S202).
  • the control unit 110 uses the comparison unit 116 to compare the estimated output image 553 and the output image 523 (S203).
  • the control unit 110 causes the specifying unit 117 to determine, based on the comparison result from the comparison unit 116, whether the difference between the estimated output image 553 and the output image 523 exceeds the threshold (S204).
  • when the control unit 110 determines that the difference between the estimated output image 553 and the output image 523 exceeds the threshold (S204: YES), it identifies the paper 900 from which the output image 523 was read as waste paper (S205).
  • when the control unit 110 determines that the difference between the estimated output image 553 and the output image 523 does not exceed the threshold (S204: NO), the process ends.
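A minimal sketch of the comparison and waste-paper decision of steps S203 to S205, assuming a mean-absolute-difference metric (the patent does not fix a particular difference measure or threshold value):

```python
import numpy as np

def inspect_sheet(estimated_output, output_image, threshold):
    """S203-S205: compare the simulated estimate with the scanned output and
    flag the sheet as waste paper when the difference exceeds the threshold.
    Mean absolute difference is an illustrative metric, not one mandated by
    the patent."""
    diff = np.mean(np.abs(estimated_output - output_image))
    return bool(diff > threshold)  # True -> identify the sheet as waste paper

est = np.full((4, 4), 0.5)   # estimated output image from the simulation (S201)
ok_scan = est + 0.01         # a scan within tolerance
bad_scan = est.copy()
bad_scan[1, 1] = 1.0         # a scan with a visible defect

sheet_ok_is_waste = inspect_sheet(est, ok_scan, threshold=0.02)
sheet_bad_is_waste = inspect_sheet(est, bad_scan, threshold=0.02)
```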
  • FIG. 11 is a block diagram illustrating functions of a control unit during printed matter inspection of the printed matter inspection apparatus according to the second embodiment.
  • in the second embodiment, the alignment unit 115 is not provided, so the estimated output image 553 and the output image 523 are not aligned. As a result, when there is less need to align the estimated output image 553 and the output image 523, such as when a single-color page is printed, the alignment processing is omitted, which reduces the amount of calculation in the printed matter inspection.
  • the simulation unit 10 has a plurality of neural networks 21 to 25 with different learned models, and the neural network that simulates the estimated output image with the smallest difference from the output image is determined. At the time of printed matter inspection, the estimated output image 553 is then simulated from the input image 503 by the determined neural network. Since the other points are the same as those in the first embodiment, redundant description is omitted.
  • FIG. 12 is a block diagram illustrating functions of a control unit during printed matter inspection of the printed matter inspection apparatus according to the third embodiment.
  • the simulation unit 10 has a plurality of neural networks 21 to 25. Each of the neural networks 21 to 25 corresponds to the neural network including the first encoder 111, the feature conversion unit 112, and the decoder 113 in the first embodiment.
  • the first neural network 21 is a learned model neural network that has been trained to detect deterioration of a specific image quality A.
  • the second neural network 22 is a learned model neural network that has been trained to detect deterioration of a specific image quality B.
  • the third neural network 23 is a learned model neural network that has been trained to detect deterioration in specific image quality C.
  • the image quality A, the image quality B, and the image quality C are different image quality, and may be, for example, at least one of gradation, color reproducibility, sharpness, noise, density, and uneven light distribution.
  • the fourth neural network 24 is, for example, a neural network of a learned model corresponding to the current usage time among the learned models learned to reproduce the specific change for each usage time of the image forming unit 170. That is, this is a neural network of a learned model that has been learned to reproduce a specific change corresponding to the usage time of the image forming unit 170 up to now.
  • the usage time up to the present time can be calculated based on, for example, a cumulative value of the number of printed sheets stored in the storage unit 120.
  • the fifth neural network 25 is a neural network of a learned model corresponding to the current usage time among the learned models learned to reproduce a specific change for each usage time of the image reading unit 150.
  • this is a neural network of a learned model learned to reproduce a specific change corresponding to the usage time of the image reading unit 150 up to now.
  • the usage time up to the present time can be calculated based on, for example, a cumulative value of the number of read sheets stored in the storage unit 120.
  • the determination unit 118 acquires a plurality of estimated output images 553 obtained by simulation with the neural networks 21 to 25.
  • from among the plurality of acquired estimated output images 553, the determination unit 118 specifies the estimated output image 553 having the smallest difference from the output image obtained by reading the image formed on the paper 900 by the image forming unit 170 based on the common input image 503.
  • the determination unit 118 determines which of the neural networks 21 to 25 simulated the specified estimated output image 553.
  • the simulation unit 10 uses the determined neural network to simulate the estimated output image 553 from the input image 503 at the time of printed matter inspection.
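The selection performed by the determination unit 118 can be sketched as follows; the stand-in "simulators" are hypothetical callables playing the role of the neural networks 21 to 25, and the mean-absolute-difference metric is an assumption for illustration:

```python
import numpy as np

def choose_network(simulators, input_image, reference_output):
    """Run every candidate learned model on the common input image and return
    the index of the one whose estimated output is closest to the actually
    scanned output (smallest mean absolute difference)."""
    diffs = [np.mean(np.abs(sim(input_image) - reference_output))
             for sim in simulators]
    return int(np.argmin(diffs))

x = np.linspace(0.0, 1.0, 8)   # common input image, flattened
scanned = x * 0.8              # output actually read back from the paper

# Stand-in "learned models", each reproducing a different degradation
sims = [lambda v: v * 0.5, lambda v: v * 0.8, lambda v: v + 0.3]
best = choose_network(sims, x, scanned)  # index of the best-matching model
```

The chosen index would then select the network used for all subsequent inspection-time simulations.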
  • the image after the specific change is simulated from the input image of the RIP data by the neural network of the learned model that has been learned in advance so as to reproduce the specific change due to the formation of the image on the recording medium and the reading of the image. Then, the image obtained by the simulation and the image obtained by reading the image formed on the printed material to be inspected are compared, and the waste paper is specified based on the comparison result. Thereby, the detection accuracy of the waste paper can be improved without increasing the work amount.
  • the neural network is trained in advance so as to output an estimated output image in which the specific change applied to the learning output image is reproduced for the learning input image.
  • the specific change is a change in image quality of at least one of gradation, color reproducibility, sharpness, noise, density, and light distribution unevenness.
  • the neural network is a learned-model neural network trained in advance using a combination of a learning input image of a chart for reproducing any one of gradation, color reproducibility, sharpness, noise, density, and uneven light distribution, and a learning output image in which the image change has been added to the learning input image of the chart. Thereby, the detection accuracy of waste paper can be improved more efficiently.
  • the neural network is a learned-model neural network trained in advance using a combination of a learning input image of a character chart and a learning output image in which the image change is added to the learning input image of the character chart.
  • the simulation unit has a plurality of neural networks of different learned models, and determines a neural network that simulates an estimated output image having the smallest difference from the output image. Then, the estimated output image is simulated from the input image by the determined neural network.
  • the estimated output image can be simulated by the neural network of the learned model having the highest simulation accuracy of the estimated output image among the learned models learned from the viewpoint of the degree of deterioration of the current device and the deterioration of the specific image quality. Therefore, the accuracy of detecting the waste paper can be further effectively improved.
  • the present invention is not limited to the above-described embodiment.
  • learning for simulating a specific change is performed by learning with a neural network using a simulation unit and a second encoder, and the specific change is simulated by a simulation unit after learning.
  • learning for simulating a specific change may be performed using a neural network having another structure, and the specific change may be simulated by a simulation unit after learning.
  • learning for simulating a specific change may be performed by machine learning without using a neural network, and the specific change may be simulated by a simulation unit after learning.
  • a fine abnormality satisfying the product standard may be determined, and the paper having the abnormality may be specified.
  • in the embodiment, paper is described as an example of the recording medium; however, the recording medium is not limited to paper and may be a resin film or the like.
  • the processing executed by a program in the embodiment may instead be executed by hardware such as a circuit.

Abstract

[Problem] To provide a printed matter inspection device with which it is possible to improve the accuracy of detecting waste paper without causing a workload to increase. [Solution] The present invention has: a simulation unit that, by a neural network of learned models that were previously learned so as to reproduce a change in an image due to the formation of an image on a recording medium and reading of the formed image, simulates an image of RIP data after the change as an estimated output image from an input image of RIP data; a comparison unit for comparing the estimated output image and an output image obtained due to reading of the image formed on the recording medium on the basis of the input image; and a specification unit for specifying waste paper on the basis of the comparison result.

Description

Printed matter inspection apparatus, printed matter inspection method, and printed matter inspection program

The present invention relates to a printed matter inspection apparatus, a printed matter inspection method, and a printed matter inspection program.
In the inspection of printed matter on which an image has been formed by an image forming apparatus, waste paper (defective sheets) is detected. There are mainly two methods for detecting waste paper. One is a method of detecting waste paper with stains or the like by extracting the difference between scanned images obtained by scanning a reference printed material and the printed material to be inspected. The other is a method of detecting waste paper by extracting the difference between reference RIP data and a scanned image obtained by scanning the printed matter to be inspected.

As a technique related to the former, there is the one described in Patent Document 1 below. Specifically, a multi-tone line image obtained by imaging the inspection surface of a sheet-like printed matter with a line sensor is pattern-matched against a reference master multi-tone line image captured in advance by the line sensor, and the density levels of the two are compared. A portion of the inspection surface corresponding to a portion where the density level difference between the two exceeds an allowable value is determined to be a defect.

As a technique related to the latter, there is the one described in Patent Document 2 below. Specifically, a master image generated from a print job and an inspection target image formed on paper by the print job are aligned and then collated to extract a difference, and when the difference exceeds a predetermined threshold, the inspection target image is determined to be a defective image and the sheet is reprinted. The alignment is performed in two stages. In the first stage, both images are divided into a plurality of blocks, and the positions of the two images are corrected so that the positions of markers superimposed on the images in a plurality of regions around the peripheries of both images coincide most closely. In the next stage, blocks whose images contain abundant edge components are selected as blocks suitable for alignment, and the positions of the two images are corrected so that the similarity within the selected blocks is highest.
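The block-matching stage of the two-stage alignment described for Patent Document 2 can be illustrated roughly as follows; the integer shift search and the similarity score (negative sum of absolute differences) are assumptions for illustration, not the method actually specified in that document:

```python
import numpy as np

def best_shift(master_block, scan_block, max_shift=2):
    """Search small integer shifts of the scanned block against the master
    block and return the shift with the highest similarity, scored here as
    the negative sum of absolute differences."""
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(scan_block, dy, axis=0), dx, axis=1)
            score = -np.sum(np.abs(master_block - shifted))
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best

master = np.zeros((8, 8))
master[3, 3] = 1.0                                      # an edge-rich feature
scan = np.roll(np.roll(master, 1, axis=0), -1, axis=1)  # displaced copy
shift = best_shift(master, scan)  # the correction that realigns the block
```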
JP-A-6-201611 (Patent Document 1); JP 2013-186562 A (Patent Document 2)

However, the former method has the problem that the amount of work increases, because it requires the creation and reading of a reference printed material. In the latter method, the changes in the image caused by forming the image on paper and reading it are included in the printed matter to be inspected, while the reference RIP data does not include those changes. Consequently, the changes have a relatively large effect on the difference between the reference RIP data and the scanned image obtained by scanning the printed matter to be inspected, degrading the detection accuracy of waste paper.

The present invention has been made to solve such problems. That is, an object of the present invention is to provide a printed matter inspection apparatus, a printed matter inspection method, and a printed matter inspection program capable of improving the detection accuracy of waste paper without increasing the amount of work.

The above problem of the present invention is solved by the following means.
(1) A printed matter inspection apparatus comprising: a simulation unit that simulates, from an input image of RIP data, an image of the RIP data after a change as an estimated output image, using a neural network of a learned model trained in advance to reproduce the change in the image caused by formation of the image on a recording medium by an image forming apparatus and reading of the formed image by a reading apparatus; a comparison unit that compares the estimated output image with an output image obtained by reading, with the reading apparatus, an image formed on a recording medium by the image forming apparatus based on the input image; and a specifying unit that identifies an abnormal recording medium based on the comparison result of the comparison unit.

(2) The printed matter inspection apparatus according to (1) above, wherein the neural network is a learned-model neural network trained in advance so that, when learning data combining a learning input image of RIP data and a learning output image in which the image change has been added to the learning input image is input, it outputs the estimated output image in which the image change added to the learning output image is reproduced for the learning input image.

(3) The printed matter inspection apparatus according to (2) above, wherein the image change is a change in image quality of at least one of gradation, color reproducibility, sharpness, noise, density, and light distribution unevenness, and the neural network is a learned-model neural network trained in advance using a combination of the learning input image of a chart for reproducing any one of gradation, color reproducibility, sharpness, noise, density, and light distribution unevenness, and the learning output image in which the image change has been added to the learning input image of the chart.

(4) The printed matter inspection apparatus according to (3) above, wherein the neural network is a learned-model neural network trained in advance using a combination of the learning input image of a character chart and the learning output image in which the image change has been added to the learning input image of the character chart.
(5) The printed matter inspection apparatus according to (1) above, wherein the simulation unit simulates the estimated output image from a common input image with each of a plurality of neural networks, namely the learned model for detecting deterioration of a specific image quality and, among learned models trained to reproduce the change for each usage time of the image forming apparatus and the reading apparatus, the learned model corresponding to the current usage time; the apparatus further comprising a determination unit that specifies, among the plurality of estimated output images obtained by the simulation of the simulation unit, the estimated output image having the smallest difference from the output image obtained by reading, with the reading apparatus, the image formed on a recording medium by the image forming apparatus based on the common input image, and determines the learned-model neural network that simulated the specified estimated output image; and wherein the simulation unit simulates the estimated output image from the input image with the learned-model neural network determined by the determination unit.
(6) A printed matter inspection method comprising: a step (a) of simulating, from an input image of RIP data, an output image of the RIP data after a change as an estimated output image, using a neural network of a learned model trained in advance to reproduce the change in the image caused by formation of the image on a recording medium by an image forming apparatus and reading of the formed image by a reading apparatus; a step (b) of comparing the estimated output image with the output image obtained by reading, with the reading apparatus, an image formed on a recording medium by the image forming apparatus based on the input image; and a step (c) of identifying an abnormal recording medium based on the comparison result in the step (b).

(7) The printed matter inspection method according to (6) above, wherein the neural network is a learned-model neural network trained in advance using teacher data combining the input image and the output image obtained by reading, with the reading apparatus, the image formed on a recording medium by the image forming apparatus based on the input image.

(8) The printed matter inspection method according to (7) above, wherein the change is a change in image quality of at least one of gradation, color reproducibility, sharpness, noise, density, and light distribution unevenness, and the neural network is a learned-model neural network trained in advance using teacher data combining the input image of a chart for reproducing each of gradation, color reproducibility, sharpness, noise, density, and light distribution unevenness with the output image corresponding to the input image of the chart.

(9) The printed matter inspection method according to (8) above, wherein the neural network is a learned-model neural network trained in advance using teacher data combining the input image of a character chart with the output image corresponding to the input image of the character chart.

(10) The printed matter inspection method according to (6) above, wherein the step (a) simulates the estimated output image from a common input image with each of the neural networks of the learned model for detecting deterioration of a specific image quality and of the learned model for reproducing the change for each usage time of at least one of the image forming apparatus and the reading apparatus; the method further comprising a step (d) of determining, among the plurality of estimated output images obtained by the simulation in the step (a), the estimated output image having a small difference from the output image corresponding to the common input image; and wherein the step (b) compares the output image obtained by reading, with the reading apparatus, the image formed on a recording medium by the image forming apparatus based on the common input image with the estimated output image determined in the step (d).

(11) A printed matter inspection program for causing a computer to execute the printed matter inspection method according to any one of (6) to (10) above.
The image after the change is simulated from the input image of the RIP data by a neural network of a learned model trained in advance to reproduce the change in the image caused by forming the image on a recording medium and reading the image. Then, the image obtained by the simulation is compared with the image obtained by reading the image formed on the printed material to be inspected, and waste paper is identified based on the comparison result. Thereby, the detection accuracy of waste paper can be improved without increasing the amount of work.
FIG. 1 is a schematic diagram illustrating the configuration of an image forming apparatus including the printed matter inspection apparatus according to the first embodiment. FIG. 2 is a block diagram illustrating the configuration of the image forming apparatus. FIG. 3 is a block diagram showing the functions of the control unit during learning of the printed matter inspection apparatus. FIG. 4 is a block diagram showing the functions of the control unit during learning of the printed matter inspection apparatus, specifically illustrating examples of the learning input image, the learning output image, and the estimated output image. FIG. 5 is a block diagram showing the functions of the control unit during learning of the printed matter inspection apparatus, specifically illustrating other examples of the learning input image, the learning output image, and the estimated output image. FIG. 6 is a diagram showing an image of image data of a character chart and an image obtained by reading an image formed on paper based on that image data. FIG. 7 is a block diagram showing the functions of the control unit during printed matter inspection of the printed matter inspection apparatus. FIG. 8 is a block diagram showing the functions of the simulation unit during printed matter inspection, specifically illustrating an example of an input image of RIP data input to the simulation unit and the estimated output image.

FIG. 9 is a flowchart showing the operation of the control unit 110 during learning of the printed matter inspection apparatus. FIG. 10 is a flowchart showing the operation of the control unit 110 during printed matter inspection of the printed matter inspection apparatus. FIG. 11 is a block diagram showing the functions of the control unit during printed matter inspection of the printed matter inspection apparatus according to the second embodiment. FIG. 12 is a block diagram showing the functions of the control unit during printed matter inspection of the printed matter inspection apparatus according to the third embodiment.
Hereinafter, a printed matter inspection apparatus, a printed matter inspection method, and a printed matter inspection program according to embodiments of the present invention will be described with reference to the drawings. In the drawings, the same elements are denoted by the same reference numerals, and redundant description is omitted. In addition, the dimensional ratios in the drawings are exaggerated for convenience of explanation and may differ from the actual ratios.

(First embodiment)

FIG. 1 is a schematic diagram illustrating the configuration of an image forming apparatus including the printed matter inspection apparatus according to the first embodiment. FIG. 2 is a block diagram illustrating the configuration of the image forming apparatus.
The image forming apparatus 100 includes a control unit 110, a storage unit 120, a communication unit 130, an operation display unit 140, an image reading unit 150, an image control unit 160, and an image forming unit 170. These components are communicably connected to each other via a bus 180. The image forming apparatus 100 may be configured as an MFP (MultiFunction Peripheral). The control unit 110 constitutes the printed matter inspection apparatus.

The control unit 110 includes a CPU (Central Processing Unit) and various memories, and controls the above units and performs various arithmetic processes according to programs. Details of the functions of the control unit 110 will be described later.

The storage unit 120 is configured by an SSD (Solid State Drive), an HDD (Hard Disk Drive), or the like, and stores various programs and various data.

The communication unit 130 is an interface for communication between the image forming apparatus 100 and external devices. As the communication unit 130, a network interface based on a standard such as Ethernet (registered trademark), SATA, or IEEE 1394 is used. Alternatively, various local connection interfaces such as wireless communication interfaces including Bluetooth (registered trademark) and IEEE 802.11 may be used.

The operation display unit 140 includes a touch panel, a numeric keypad, a start button, a stop button, and the like, and is used for displaying various information and inputting various instructions.

The image reading unit 150 constitutes a reading apparatus and includes a light source such as a fluorescent lamp and an image sensor such as a CCD (Charge Coupled Device) image sensor. The image reading unit 150 irradiates a document set at a predetermined reading position with light from the light source, photoelectrically converts the reflected light with the image sensor, and generates image data from the resulting electric signal.

The image control unit 160 performs layout processing and rasterization processing on print data included in a print job received by the communication unit 130, and generates image data in bitmap format.

A print job is a general term for a print instruction to the image forming apparatus 100 and includes print data and print settings. Print data is data of the document to be printed and may include various kinds of data such as image data, vector data, and text data. Specifically, the print data may be PDL (Page Description Language) data, PDF (Portable Document Format) data, or TIFF (Tagged Image File Format) data. Print settings are settings related to image formation on paper, and include various settings such as the number of pages, the number of copies, the paper type, color or monochrome selection, and page layout.
 画像形成部170は、作像部40、定着部50、給紙部60、および用紙搬送部70を有する。 The image forming unit 170 includes an image forming unit 40, a fixing unit 50, a paper feeding unit 60, and a paper conveying unit 70.
 作像部40は、Y(イエロー)、M(マゼンタ)、C(シアン)、およびK(ブラック)の各色のトナーに対応した作像ユニット41Y、41M、41C、41Kを有する。各作像ユニット41Y、41M、41C、41Kにより、画像データに基づいて、帯電、露光、および現像のプロセスを経て形成されたトナー画像は、中間転写ベルト42上に順次重ねられて、2次転写ローラー43により用紙900上に転写される。 The image forming unit 40 includes image forming units 41Y, 41M, 41C, and 41K corresponding to the toners of the respective colors Y (yellow), M (magenta), C (cyan), and K (black). The toner images formed by the image forming units 41Y, 41M, 41C, and 41K through the charging, exposure, and development processes based on the image data are sequentially superimposed on the intermediate transfer belt 42 and then transferred onto the paper 900 by the secondary transfer roller 43.
 定着部50は、加熱ローラー51および加圧ローラー52を有し、両ローラー51、52の定着ニップに搬送された用紙900を加熱および加圧して、用紙900上のトナー画像をその表面に溶融定着する。 The fixing unit 50 includes a heating roller 51 and a pressure roller 52, and heats and presses the paper 900 conveyed to the fixing nip between the two rollers 51 and 52, thereby fusing the toner image onto the surface of the paper 900.
 定着部50によりトナー画像が定着された用紙900は、印刷物(出力物)として排紙トレイ190に排紙される。 The paper 900 on which the toner image is fixed by the fixing unit 50 is discharged to the paper discharge tray 190 as a printed material (output product).
 給紙部60は、複数の給紙トレイ61、62を有し、給紙トレイ61、62に収容された用紙900を1枚ずつ下流側の搬送経路に送り出す。 The paper feed unit 60 has a plurality of paper feed trays 61 and 62, and sends out the paper 900 stored in the paper feed trays 61 and 62 one by one to the downstream transport path.
 用紙搬送部70は、用紙900を搬送するための複数の搬送ローラーを有し、作像部40、定着部50、および給紙部60の各部間で用紙900を搬送する。複数の搬送ローラーには、用紙900の傾きを矯正するためのレジストローラー71や、用紙900に所定量のループを形成するためのループローラー72が含まれる。 The paper transport unit 70 includes a plurality of transport rollers for transporting the paper 900, and transports the paper 900 between the image forming unit 40, the fixing unit 50, and the paper feeding unit 60. The plurality of transport rollers include a registration roller 71 for correcting the inclination of the paper 900 and a loop roller 72 for forming a predetermined amount of loop on the paper 900.
 用紙搬送部70は、画像形成された用紙900を排紙トレイ90に排紙する。 The paper transport unit 70 discharges the paper 900 on which an image has been formed to the paper discharge tray 90.
 制御部110の機能の詳細について説明する。 Details of the function of the control unit 110 will be described.
 図3は、印刷物検査装置の学習時の制御部の機能を示すブロック図である。上述したように、制御部110は印刷物検査装置を構成するため、以下、印刷物検査装置の学習時の主体を印刷物検査装置として説明する。 FIG. 3 is a block diagram illustrating functions of the control unit during learning of the printed matter inspection apparatus. As described above, since the control unit 110 configures the printed matter inspection apparatus, the main body during learning of the printed matter inspection apparatus will be described as a printed matter inspection apparatus.
 印刷物検査装置は、第1エンコーダー111、特徴変換部112、デコーダー113、および第2エンコーダー114を有する。これらの構成要素はそれぞれニューラルネットワークにより構成され得る。第1エンコーダー111、特徴変換部112、およびデコーダー113は、シミュレーション部10を構成する。なお、第2エンコーダー114は、印刷物検査装置の学習時のみに必要となり、後述する印刷物検査時には不要となる。このため、学習後は、第2エンコーダーは印刷物検査装置に実装されなくてもよい。 The printed matter inspection apparatus includes a first encoder 111, a feature conversion unit 112, a decoder 113, and a second encoder 114. Each of these components can be constituted by a neural network. The first encoder 111, the feature conversion unit 112, and the decoder 113 constitute the simulation unit 10. The second encoder 114 is necessary only when the printed material inspection apparatus is learning, and is not necessary during printed material inspection described later. For this reason, after learning, the second encoder may not be mounted on the printed matter inspection apparatus.
 印刷物検査装置は、画像形成部170による用紙900への画像の形成と、形成された画像の画像読取部150による読取りと、による画像の変化を再現するための学習をする。以下、画像形成部170による用紙900への画像の形成と、形成された画像の画像読取部150による読取りと、による画像の変化を、「特定変化」と称する。これにより、学習後の印刷物検査装置は、特定変化した画像データを推定出力画像としてシミュレーションできる。特定変化には、たとえば、画像形成部170による用紙900への画像の形成の際の、潜像形成時の光学系のノイズによる画像の変化やトナー画像の定着時の用紙900のサイズの変化、ならびに画像読取部150による画像の読取り時の光学系のノイズによる画像の変化等が含まれる。 The printed matter inspection apparatus learns to reproduce the change that an image undergoes through the formation of the image on the paper 900 by the image forming unit 170 and the reading of the formed image by the image reading unit 150. Hereinafter, this change in the image caused by the formation of the image on the paper 900 by the image forming unit 170 and the reading of the formed image by the image reading unit 150 is referred to as the "specific change". After learning, the printed matter inspection apparatus can thereby simulate, as an estimated output image, image data that has undergone the specific change. The specific change includes, for example, image changes due to optical-system noise during latent image formation and changes in the size of the paper 900 during toner image fixing when the image forming unit 170 forms an image on the paper 900, as well as image changes due to optical-system noise when the image reading unit 150 reads the image.
 印刷物検査装置は、RIPデータの学習用入力画像500と、当該学習用入力画像500に特定変化が加えられた学習用出力画像600と、の組合せを学習用データとして学習する。特定変化には、階調性、色再現性、先鋭性、ノイズ、濃度、および配光ムラの少なくともいずれかの画質の変化が含まれる。階調性とは、たとえば色の濃淡の変化、または濃淡変化の滑らかさの特性である。色再現性とは、たとえばオリジナルの色の再現の程度を示す特性である。先鋭性とは、たとえば画像の明瞭さの特性である。配光ムラとは、たとえば画像空間に対する光度分布が均一でなくなっている様である。 The printed matter inspection apparatus learns, as learning data, a combination of a learning input image 500 of RIP data and a learning output image 600 obtained by adding the specific change to the learning input image 500. The specific change includes a change in at least one of the following aspects of image quality: gradation, color reproducibility, sharpness, noise, density, and uneven light distribution. Gradation is, for example, the characteristic of variations in color shade, or of the smoothness of such variations. Color reproducibility is, for example, a characteristic indicating how faithfully the original colors are reproduced. Sharpness is, for example, a characteristic of image clarity. Uneven light distribution refers, for example, to a state in which the luminous intensity distribution over the image space is no longer uniform.
 第1エンコーダー111には、RIPデータの学習用入力画像500と、当該学習用入力画像に特定変化が加えられた学習用出力画像600とが、学習用データとして入力される。学習用入力画像500は、たとえばコンテンツAのビットマップ形式のRIPデータである。学習用入力画像500は、印刷ジョブに含まれる印刷データが画像制御部160によりラスタライズ処理されることで得られる画像データ等を想定している。したがって、学習用入力画像500は、用紙900への画像の形成や形成された画像の画像読取部150による読取りが行われる前の画像であるため、特定変化は含まれていない。学習用出力画像600は、印刷ジョブに含まれる印刷データが画像制御部160によりラスタライズ処理されて得られる画像データに基づいて画像形成部170により用紙900に画像が形成され当該画像が画像読取部150により読取られて得られる画像等である。したがって、学習用出力画像600には特定変化が含まれている。 The first encoder 111 receives, as learning data, a learning input image 500 of RIP data and a learning output image 600 obtained by adding the specific change to the learning input image. The learning input image 500 is, for example, bitmap-format RIP data of content A. The learning input image 500 is assumed to be, for example, image data obtained by the image control unit 160 rasterizing the print data included in a print job. Accordingly, the learning input image 500 is an image from before the image is formed on the paper 900 and before the formed image is read by the image reading unit 150, and therefore does not contain the specific change. The learning output image 600 is, for example, an image obtained by the image forming unit 170 forming an image on the paper 900 based on the image data obtained by the image control unit 160 rasterizing the print data included in the print job, and by the image reading unit 150 then reading that image. Accordingly, the learning output image 600 contains the specific change.
 第1エンコーダー111は、学習用入力画像500からコンテンツAの特徴を抽出するとともに、学習用出力画像600から学習用出力画像600に含まれる特定変化の特徴を抽出する。 The first encoder 111 extracts the feature of the content A from the learning input image 500 and extracts the feature of the specific change included in the learning output image 600 from the learning output image 600.
 特徴変換部112は、学習用入力画像500のコンテンツAの特徴と学習用出力画像600に含まれる特定変化の特徴とに基づいて、コンテンツAの特徴に特定変化の特徴を加える変換を行う。これにより、特徴変換部112は、コンテンツAに特定変化が加わった画像の特徴を算出する。 The feature conversion unit 112 performs a conversion that adds the feature of the specific change to the feature of the content A, based on the feature of the content A in the learning input image 500 and the feature of the specific change included in the learning output image 600. The feature conversion unit 112 thereby calculates the feature of an image in which the specific change is added to the content A.
 デコーダー113は、コンテンツAの特徴に特定変化の特徴が加わった特徴から、コンテンツAに特定変化が加わった画像を、推定出力画像550として再現する。 The decoder 113 reproduces, as the estimated output image 550, an image in which the specific change is added to the content A from the feature in which the specific change feature is added to the feature of the content A.
 第2エンコーダー114は、推定出力画像550から、コンテンツAに特定変化が加わった画像の特徴を抽出する。 The second encoder 114 extracts, from the estimated output image 550, the feature of the image in which the specific change is added to the content A.
 印刷物検査装置は、特徴変換部112により算出された、コンテンツAの特徴に特定変化の特徴が加わった特徴と、第2エンコーダー114により抽出された、コンテンツAに特定変化が加わった画像の特徴との差異に基づく第1ロスL1を算出する。 The printed matter inspection apparatus calculates a first loss L1 based on the difference between the feature calculated by the feature conversion unit 112, in which the feature of the specific change is added to the feature of the content A, and the feature, extracted by the second encoder 114, of the image in which the specific change is added to the content A.
 第2エンコーダー114は、推定出力画像550から推定出力画像550に含まれる特定変化の特徴を第1特徴として抽出する。第2エンコーダー114は、学習用出力画像600から学習用出力画像600に含まれる特定変化の特徴を第2特徴として抽出する。 The second encoder 114 extracts the feature of the specific change included in the estimated output image 550 from the estimated output image 550 as the first feature. The second encoder 114 extracts the feature of the specific change included in the learning output image 600 from the learning output image 600 as the second feature.
 印刷物検査装置は、第1特徴と第2特徴との差異に基づく第2ロスL2を算出する。 The printed matter inspection apparatus calculates a second loss L2 based on the difference between the first feature and the second feature.
 印刷物検査装置は、第1ロスL1と第2ロスL2の和を総合ロスとして算出し、総合ロスが最も小さくなるように、第1エンコーダー111、特徴変換部112、デコーダー113、および第2エンコーダー114を誤差逆伝播法により学習させる。なお、総合ロスは、第1ロスL1および第2ロスL2に適切な重み付けがなされた後の和であってもよい。 The printed matter inspection apparatus calculates the sum of the first loss L1 and the second loss L2 as a total loss, and trains the first encoder 111, the feature conversion unit 112, the decoder 113, and the second encoder 114 by error backpropagation so that the total loss is minimized. The total loss may instead be the sum of the first loss L1 and the second loss L2 after appropriate weighting.
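The loss computation described above can be sketched as follows. This is a minimal illustration that assumes features are plain numeric vectors and uses a mean-squared difference as the distance measure; the patent itself only speaks of a "difference" and does not fix these choices.

```python
def feature_loss(f_a, f_b):
    """Mean squared difference between two feature vectors (an assumed
    distance measure; the text only speaks of a 'difference')."""
    assert len(f_a) == len(f_b)
    return sum((a - b) ** 2 for a, b in zip(f_a, f_b)) / len(f_a)

def total_loss(l1, l2, w1=1.0, w2=1.0):
    """Total loss as the (optionally weighted) sum of the first loss L1
    and the second loss L2, as described in the text."""
    return w1 * l1 + w2 * l2
```

In an actual implementation, the gradient of this total loss would be propagated back through the networks by the error backpropagation method.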
 図4は、学習用入力画像、学習用出力画像、および推定出力画像の例を具体的に示した、印刷物検査装置の学習時の制御部の機能を示すブロック図である。 FIG. 4 is a block diagram showing functions of the control unit during learning of the printed matter inspection apparatus, specifically showing examples of the learning input image, the learning output image, and the estimated output image.
 学習用入力画像501のコンテンツは、たとえば、内部に色が付されていない円、および内部に濃度の異なる色が付された3つの円を含む4つの円である。学習用出力画像601のコンテンツは、たとえば、学習用入力画像501のコンテンツと同じである。学習用出力画像601には、特定変化としてコンテンツを含む全体にグレーの色を含む。この場合、推定出力画像551は、学習用入力画像501のコンテンツに、学習用出力画像601に含まれる特定変化が反映された画像となる。具体的には、推定出力画像551は、内部に色が付されていない円、および内部に濃度の異なる色が付された3つの円を含む4つの円のコンテンツを含む画像全体にグレーの色が特定変化として反映された画像となる。 The content of the learning input image 501 is, for example, four circles: one circle with no color inside and three circles filled with colors of different densities. The content of the learning output image 601 is, for example, the same as that of the learning input image 501. The learning output image 601 contains, as the specific change, a gray tint over the entire image including the content. In this case, the estimated output image 551 is an image in which the specific change contained in the learning output image 601 is reflected in the content of the learning input image 501. Specifically, the estimated output image 551 is an image in which a gray tint is reflected, as the specific change, over the entire image containing the four-circle content, that is, one circle with no color inside and three circles filled with colors of different densities.
 図5は、学習用入力画像、学習用出力画像、および推定出力画像の他の例を具体的に示した、印刷物検査装置の学習時の制御部の機能を示すブロック図である。 FIG. 5 is a block diagram showing functions of the control unit during learning of the printed matter inspection apparatus, specifically showing another example of the learning input image, the learning output image, and the estimated output image.
 学習用入力画像502のコンテンツは、たとえば、内部に色が付されていない円、および内部に濃度の異なる色が付された3つの円を含む4つの円を含むコンテンツAと同じコンテンツである。学習用出力画像602のコンテンツは、たとえば、学習用入力画像502のコンテンツ(コンテンツA)とは異なり、内部に黒色が付された4つの円のコンテンツ(コンテンツB)である。学習用出力画像602には、特定変化としてコンテンツを含む全体にグレーの色を含む。この場合、推定出力画像552は、学習用入力画像502のコンテンツに、学習用出力画像602に含まれる特定変化が反映された画像となる。具体的には、推定出力画像552は、内部に色が付されていない円、および内部に濃度の異なる色が付された3つの円を含む4つの円のコンテンツを含む画像全体にグレーの色が特定変化として反映された画像となる。すなわち、図4の例と図5の例において、推定出力画像550は同じものになる。 The content of the learning input image 502 is, for example, the same as content A, that is, four circles including one circle with no color inside and three circles filled with colors of different densities. The content of the learning output image 602 is, for example, content B, four circles filled with black inside, which differs from the content of the learning input image 502 (content A). The learning output image 602 contains, as the specific change, a gray tint over the entire image including the content. In this case, the estimated output image 552 is an image in which the specific change contained in the learning output image 602 is reflected in the content of the learning input image 502. Specifically, the estimated output image 552 is an image in which a gray tint is reflected, as the specific change, over the entire image containing the four-circle content, that is, one circle with no color inside and three circles filled with colors of different densities. In other words, the estimated output image 550 is the same in the example of FIG. 4 and the example of FIG. 5.
 図6は、文字のチャートの画像データの画像と、当該画像データに基づいて用紙に形成された画像が読み取られることで得られた画像とを示す図である。図6において、左側の図が文字のチャートの画像データの画像であり、右側の図が文字のチャートの画像データに基づいて用紙に形成された画像が読み取られることで得られた画像である。 FIG. 6 is a diagram illustrating an image of image data of a character chart and an image obtained by reading an image formed on a sheet based on the image data. In FIG. 6, the left figure is an image of image data of a character chart, and the right figure is an image obtained by reading an image formed on a sheet based on the image data of a character chart.
 図6の例に示すように、用紙900への画像の形成および用紙900上の画像が読み取られることにより、文字のエッジが鈍り、色も変化する。また、たとえば文字の近くにノイズがのると、数字の桁が変わる等、文字による情報を大きく棄損させる可能性がある。文字のチャートの学習用入力画像500と、文字のチャートに特定変化を反映させた学習用出力画像600との組合せの学習データを用いてシミュレーション部10を学習させることにより、後述する印刷物検査において、文字を含む印刷物に関するヤレ紙検出精度を効果的に向上させることができる。 As shown in the example of FIG. 6, forming the image on the paper 900 and reading the image on the paper 900 dulls the edges of the characters and also changes their color. Furthermore, noise appearing near a character can severely corrupt the information the characters convey, for example by changing the digits of a number. By training the simulation unit 10 with learning data consisting of combinations of a learning input image 500 of a character chart and a learning output image 600 in which the specific change is reflected in the character chart, the accuracy of detecting waste paper (defective sheets) for printed matter containing characters can be effectively improved in the printed matter inspection described later.
 図7は、印刷物検査装置の印刷物検査時の制御部の機能を示すブロック図である。上述したように、制御部110は印刷物検査装置を構成するため、以下、印刷物検査における主体を、学習時と同様に、印刷物検査装置として説明する。なお、図7においては、説明を簡単にするために、画像形成部170および画像読取部150も併せて示されている。 FIG. 7 is a block diagram showing functions of the control unit during printed matter inspection of the printed matter inspection apparatus. As described above, since the control unit 110 configures the printed matter inspection apparatus, the main body in the printed matter inspection will be described below as the printed matter inspection apparatus as in the learning. In FIG. 7, the image forming unit 170 and the image reading unit 150 are also shown for ease of explanation.
 印刷物検査装置は、第1エンコーダー111、特徴変換部112、デコーダー113、位置合せ部115、比較部116、および特定部117を有する。 The printed matter inspection apparatus includes a first encoder 111, a feature conversion unit 112, a decoder 113, an alignment unit 115, a comparison unit 116, and a specifying unit 117.
 第1エンコーダー111、特徴変換部112、およびデコーダー113は、上述した、特定変化を再現するための学習があらかじめなされている。第1エンコーダー111、特徴変換部112、およびデコーダー113は、シミュレーション部10を構成する。 The first encoder 111, the feature conversion unit 112, and the decoder 113 have been trained in advance to reproduce the specific change, as described above. The first encoder 111, the feature conversion unit 112, and the decoder 113 constitute the simulation unit 10.
 印刷物検査の検査対象は、用紙900に形成された検査対象画像513である。 The inspection target of the printed matter inspection is an inspection target image 513 formed on the paper 900.
 検査対象画像513を印刷物として出力させるための印刷ジョブに含まれる印刷データが画像制御部160によりラスタライズ処理されることで得られるビットマップ形式のRIPデータである画像データが、入力画像として第1エンコーダー111に入力される。 Image data in the form of bitmap-format RIP data, obtained by the image control unit 160 rasterizing the print data included in the print job for outputting the inspection target image 513 as printed matter, is input to the first encoder 111 as an input image.
 図7の例においては、入力画像503のコンテンツはコンテンツCである。 In the example of FIG. 7, the content of the input image 503 is content C.
 第1エンコーダー111は、入力画像503からコンテンツCの特徴を抽出する。第1エンコーダー111は、抽出したコンテンツCの特徴とともに、特定変化の特徴を特徴変換部112へ出力する。第1エンコーダー111は、あらかじめ学習されることで、コンテンツCの特徴の抽出および出力、ならびに特定変化の特徴の出力が可能になっている。 The first encoder 111 extracts the feature of the content C from the input image 503. The first encoder 111 outputs the feature of the specific change to the feature conversion unit 112 together with the extracted feature of the content C. Because it has been trained in advance, the first encoder 111 is capable of extracting and outputting the feature of the content C and of outputting the feature of the specific change.
 特徴変換部112は、入力画像503のコンテンツCの特徴に特定変化の特徴を加える変換を行う。 The feature conversion unit 112 performs conversion to add a feature of specific change to the feature of the content C of the input image 503.
 デコーダー113は、特徴変換部112による変換により得られた特徴から、コンテンツCに特定変化が加わった画像を、推定出力画像553として再現する。 The decoder 113 reproduces, as the estimated output image 553, an image in which the specific change is added to the content C from the features obtained by the conversion by the feature conversion unit 112.
 一方、入力画像503は、画像形成部170により用紙900に形成されることで検査対象画像(用紙上の画像)513となる。 On the other hand, the input image 503 is formed on the paper 900 by the image forming unit 170 to become an inspection target image (image on the paper) 513.
 検査対象画像513は、印刷物検査のために、画像読取部150により読取られることで出力画像(読取画像)523となる。 The inspection target image 513 becomes an output image (read image) 523 by being read by the image reading unit 150 for printed matter inspection.
 位置合せ部115は、たとえば製本時に使用されるいわゆる「トンボ」等マーカーを利用した公知の方法により、出力画像523と推定出力画像553とを位置合わせする。 The alignment unit 115 aligns the output image 523 and the estimated output image 553 by a known method using markers such as the so-called register marks ("tombo") used in bookbinding.
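The register-mark alignment can be sketched as estimating a simple translation from the position of one mark in each image. This is an illustrative simplification under assumed conditions: images are 2-D lists of pixel values, one mark per image, and the mark is detected by an exact pixel-value match rather than by a real mark-detection method.

```python
def mark_position(img, mark):
    """Return the (row, col) of the first pixel equal to the register-mark
    value. Exact-value matching is a stand-in for real mark detection."""
    for r, row in enumerate(img):
        for c, v in enumerate(row):
            if v == mark:
                return (r, c)
    raise ValueError("register mark not found")

def alignment_offset(reference, scanned, mark):
    """Translation (d_row, d_col) that maps the reference image onto the
    scanned image, estimated from one register mark in each image."""
    r1, c1 = mark_position(reference, mark)
    r2, c2 = mark_position(scanned, mark)
    return (r2 - r1, c2 - c1)
```

The comparison unit would then shift one image by this offset before computing pixel-wise differences.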
 比較部116は、位置合わせされた、出力画像523と基準画像である推定出力画像553とを比較する。比較部116は、たとえば、推定出力画像553と出力画像523とを、位置合わせされたことで対応する画素同士の明度、色相、および彩度の少なくともいずれかについて差分を算出することで比較してもよい。 The comparison unit 116 compares the aligned output image 523 with the estimated output image 553, which serves as the reference image. For example, the comparison unit 116 may compare the estimated output image 553 and the output image 523 by calculating, for the pixels brought into correspondence by the alignment, differences in at least one of brightness, hue, and saturation.
 特定部117は、比較部116による比較結果に基づいて、検査対象画像513が形成された用紙(印刷物)900がヤレ紙かどうか判定する。これにより、特定部117は、ヤレ紙を特定する。特定部117は、比較部116により算出された差分があらかじめ設定された閾値を超えた画素があると判断した場合、当該画素を含む出力画像523が読み取られた用紙900をヤレ紙と判断し得る。当該閾値は、上述した画素同士の明度等の差分の大きさとヤレ紙と判断される場合との相関関係を実験等によりあらかじめ求めておき、当該相関関係に基づいて設定し得る。 The identifying unit 117 determines, based on the comparison result from the comparison unit 116, whether the paper (printed matter) 900 on which the inspection target image 513 is formed is waste paper. The identifying unit 117 thereby identifies waste paper. When the identifying unit 117 determines that some pixel has a difference, calculated by the comparison unit 116, exceeding a preset threshold, it can judge the paper 900 from which the output image 523 containing that pixel was read to be waste paper. The threshold can be set based on a correlation, determined in advance by experiment or the like, between the magnitude of the inter-pixel differences in brightness and the like described above and the cases judged to be waste paper.
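The per-pixel comparison and thresholding performed by the comparison unit 116 and the identifying unit 117 can be sketched as follows. Comparing a single grayscale channel is an assumed simplification of the brightness/hue/saturation comparison described above.

```python
def is_waste_paper(estimated, scanned, threshold):
    """Return True if any pixel of the scanned output image differs from
    the estimated output image by more than the preset threshold.

    'estimated' and 'scanned' are same-sized 2-D lists of single-channel
    pixel values; a one-channel comparison is an illustrative stand-in
    for the brightness/hue/saturation comparison in the text."""
    for est_row, scan_row in zip(estimated, scanned):
        for est_px, scan_px in zip(est_row, scan_row):
            if abs(est_px - scan_px) > threshold:
                return True   # the sheet is judged to be waste paper
    return False
```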
 図8は、シミュレーション部に入力されるRIPデータの入力画像と推定出力画像の例を具体的に示した、印刷物検査時のシミュレーション部の機能を示すブロック図である。 FIG. 8 is a block diagram showing the functions of the simulation unit at the time of printed matter inspection, specifically showing an example of an input image and an estimated output image of RIP data input to the simulation unit.
 入力画像504のコンテンツは、たとえば、内部に色が付されていない円、および内部に濃度の異なる色が付された3つの円を含む4つの円のコンテンツCである。図7の例においては、シミュレーション部10を構成する、第1エンコーダー111、特徴変換部112、およびデコーダー113は、特定変化として、コンテンツを含む画像全体にグレーの色を再現するようにあらかじめ学習されている。 The content of the input image 504 is, for example, content C of four circles including one circle with no color inside and three circles filled with colors of different densities. In the example of FIG. 7, the first encoder 111, the feature conversion unit 112, and the decoder 113 constituting the simulation unit 10 have been trained in advance to reproduce, as the specific change, a gray tint over the entire image including the content.
 このため、コンテンツCの入力画像504が入力されると、シミュレーション部10は、内部に色が付されていない円、および内部に濃度の異なる色が付された3つの円を含む4つの円のコンテンツCを含む画像全体にグレーの色が特定変化として反映された推定出力画像554をシミュレーションして出力する。 Accordingly, when the input image 504 of content C is input, the simulation unit 10 simulates and outputs an estimated output image 554 in which a gray tint is reflected, as the specific change, over the entire image containing content C, that is, four circles including one circle with no color inside and three circles filled with colors of different densities.
 制御部110の動作について説明する。 The operation of the control unit 110 will be described.
 図9は、印刷物検査装置の学習時の制御部110の動作を示すフローチャートである。本フローチャートは、制御部110により、プログラムにしたがい実行され得る。 FIG. 9 is a flowchart showing the operation of the control unit 110 during learning of the printed matter inspection apparatus. This flowchart can be executed by the control unit 110 according to a program.
 制御部110は、学習用入力画像500と、学習用出力画像600との組合せを学習用データとして取得する(S101)。制御部110は、あらかじめ記憶部120に記憶された学習用データを読み出すことで、学習用データを取得し得る。 The control unit 110 acquires a combination of the learning input image 500 and the learning output image 600 as learning data (S101). The control unit 110 can acquire the learning data by reading the learning data stored in the storage unit 120 in advance.
 制御部110は、第1エンコーダー111により、学習用入力画像500のコンテンツの特徴と、学習用出力画像600に含まれる特定変化の特徴を、学習用データから抽出する(S102)。 The control unit 110 uses the first encoder 111 to extract the feature of the content of the learning input image 500 and the feature of the specific change included in the learning output image 600 from the learning data (S102).
 制御部110は、ステップS102において抽出したコンテンツの特徴に特定変化の特徴を加える変換を特徴変換部112により実行する(S103)。 The control unit 110 causes the feature conversion unit 112 to perform conversion for adding the specific change feature to the content feature extracted in step S102 (S103).
 制御部110は、ステップS103における変換により得られた特徴から、推定出力画像550を、デコーダー113により再現する(S104)。 The control unit 110 uses the decoder 113 to reproduce the estimated output image 550 from the feature obtained by the conversion in step S103 (S104).
 制御部110は、ステップS103における変換により得られた特徴と、第2エンコーダー114により推定出力画像550から抽出した推定出力画像550の特徴との差異を第1ロスL1として算出する(S105)。 The control unit 110 calculates, as the first loss L1, the difference between the feature obtained by the conversion in step S103 and the feature of the estimated output image 550 extracted from the estimated output image 550 by the second encoder 114 (S105).
 制御部110は、ステップS101で取得した学習用出力画像600から特定変化の特徴を第1特徴として抽出する(S106)。 The control unit 110 extracts the feature of the specific change from the learning output image 600 acquired in step S101 as the first feature (S106).
 制御部110は、推定出力画像550から特定変化の特徴を第2特徴として抽出する(S107)。 The control unit 110 extracts the feature of the specific change from the estimated output image 550 as the second feature (S107).
 制御部110は、第1特徴と第2特徴との差異を第2ロスとして算出する(S108)。 The control unit 110 calculates a difference between the first feature and the second feature as a second loss (S108).
 制御部110は、第1ロスL1と第2ロスL2の和が最も小さくなるように、シミュレーション部10および第2エンコーダー114を学習する(S109)。 The control unit 110 learns the simulation unit 10 and the second encoder 114 so that the sum of the first loss L1 and the second loss L2 is minimized (S109).
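Steps S102 to S109 above can be summarized as a single training-step function. The `encoder1`, `transform`, `decoder`, `encoder2`, and `feature_loss` arguments stand in for the trained networks and the loss measure and are assumptions for illustration; an actual implementation would additionally update the networks by backpropagating the returned loss.

```python
def training_step(input_img, target_img,
                  encoder1, transform, decoder, encoder2, feature_loss):
    """One pass of the learning procedure above (loss computation only;
    the backpropagation update itself is omitted)."""
    content_feat = encoder1(input_img)           # S102: content feature
    change_feat = encoder1(target_img)           # S102: specific-change feature
    combined_feat = transform(content_feat, change_feat)  # S103
    estimated = decoder(combined_feat)           # S104: estimated output image
    # S105: first loss between the converted feature and the feature
    # re-extracted from the estimated output image
    l1 = feature_loss(combined_feat, encoder2(estimated))
    # S106-S108: second loss between the specific-change features of the
    # learning output image and the estimated output image
    l2 = feature_loss(encoder2(target_img), encoder2(estimated))
    return l1 + l2                               # S109: total loss to minimize
```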
 図10は、印刷物検査装置の印刷物検査時の制御部の動作を示すフローチャートである。本フローチャートは、制御部110により、プログラムにしたがい実行され得る。 FIG. 10 is a flowchart showing the operation of the control unit during printed matter inspection of the printed matter inspection apparatus. This flowchart can be executed by the control unit 110 according to a program.
 制御部110は、学習済のシミュレーション部10により、RIPデータの入力画像503から、推定出力画像553をシミュレーションする(S201)。 The control unit 110 simulates the estimated output image 553 from the input image 503 of the RIP data by the learned simulation unit 10 (S201).
 制御部110は、入力画像503に基づいて、画像形成部170により用紙900に検査対象画像513を形成し、形成された検査対象画像513を画像読取部150により読み取ることで出力画像523を取得する(S202)。 Based on the input image 503, the control unit 110 forms an inspection target image 513 on the paper 900 by the image forming unit 170, and acquires the output image 523 by reading the formed inspection target image 513 by the image reading unit 150. (S202).
 制御部110は、比較部116により、推定出力画像553と出力画像523とを比較する(S203)。 The control unit 110 uses the comparison unit 116 to compare the estimated output image 553 and the output image 523 (S203).
 制御部110は、特定部117により、比較部116による比較結果に基づいて、推定出力画像553と出力画像523の差分が閾値を超えたかどうか判断する(S204)。 The control unit 110 determines whether the difference between the estimated output image 553 and the output image 523 exceeds the threshold value based on the comparison result by the comparison unit 116 by the specifying unit 117 (S204).
 制御部110は、推定出力画像553と出力画像523の差分が閾値を超えたと判断したときは(S204:YES)、出力画像523が読み取られた用紙900をヤレ紙として特定する(S205)。制御部110は、推定出力画像553と出力画像523の差分が閾値を超えていないと判断したときは(S204:NO)、処理を終了する。 When the controller 110 determines that the difference between the estimated output image 553 and the output image 523 exceeds the threshold (S204: YES), the control unit 110 identifies the paper 900 on which the output image 523 has been read as a waste paper (S205). When the control unit 110 determines that the difference between the estimated output image 553 and the output image 523 does not exceed the threshold value (S204: NO), the process ends.
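The inspection flow S201 to S205 can be sketched end to end as follows. Here `simulate` stands in for the trained simulation unit 10, `print_and_scan` for the image forming unit 170 plus the image reading unit 150, and `diff` for the comparison unit 116; all are illustrative stand-ins, not part of the patent.

```python
def inspect(input_image, simulate, print_and_scan, diff, threshold):
    """Inspection flow: simulate the expected scan, acquire the real
    scan, compare, and flag the sheet when the difference is too large."""
    estimated = simulate(input_image)         # S201: estimated output image
    scanned = print_and_scan(input_image)     # S202: actual output image
    difference = diff(estimated, scanned)     # S203: comparison
    return difference > threshold             # S204/S205: True -> waste paper
```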
 (第2実施形態) (Second Embodiment)
 第2実施形態について説明する。本実施形態と第1実施形態とで異なる点は、本実施形態は、制御部110の機能として位置合せ部115を有しない点である。それ以外の点は、第1実施形態と同様であるため、重複となる説明は省略する。 The second embodiment will be described. This embodiment differs from the first embodiment in that it does not include the alignment unit 115 as a function of the control unit 110. The other points are the same as in the first embodiment, and redundant description is therefore omitted.
 図11は、第2実施形態に係る印刷物検査装置の印刷物検査時の制御部の機能を示すブロック図である。 FIG. 11 is a block diagram illustrating functions of a control unit during printed matter inspection of the printed matter inspection apparatus according to the second embodiment.
 本実施形態においては、位置合せ部115を有しないため、推定出力画像553と出力画像523との位置合わせを行わない。これにより、単一色のページが印刷される場合等、推定出力画像553と出力画像523との位置合わせを行う必要性が低い場合に、位置合わせの処理が省略されることで、印刷物検査における演算量を抑制できる。 In this embodiment, since the alignment unit 115 is not provided, the estimated output image 553 and the output image 523 are not aligned. As a result, in cases where there is little need to align the estimated output image 553 and the output image 523, such as when single-color pages are printed, omitting the alignment processing reduces the amount of computation in the printed matter inspection.
 (第3実施形態) (Third Embodiment)
 第3実施形態について説明する。本実施形態と第1実施形態とで異なる点は次の点である。すなわち、本実施形態は、シミュレーション部10が、異なる学習済モデルの複数のニューラルネットワーク21~25を有し、出力画像との差異が最も小さい推定出力画像をシミュレーションしたニューラルネットワーク21~25を決定する。そして、印刷物検査時に、決定したニューラルネットワーク21~25により、入力画像503から推定出力画像553をシミュレーションする点である。それ以外の点は、第1実施形態と同様であるため、重複となる説明は省略する。 The third embodiment will be described. This embodiment differs from the first embodiment in the following respect: the simulation unit 10 includes a plurality of neural networks 21 to 25 with different learned models, and determines which of the neural networks 21 to 25 simulated the estimated output image having the smallest difference from the output image. At the time of printed matter inspection, the determined one of the neural networks 21 to 25 then simulates the estimated output image 553 from the input image 503. The other points are the same as in the first embodiment, and redundant description is therefore omitted.
 図12は、第3実施形態の印刷物検査装置の印刷物検査時の制御部の機能を示すブロック図である。 FIG. 12 is a block diagram illustrating functions of a control unit during printed matter inspection of the printed matter inspection apparatus according to the third embodiment.
 シミュレーション部10は、複数のニューラルネットワーク21~25を有する。これらのニューラルネットワーク21~25は、それぞれ、第1実施形態における第1エンコーダー111、特徴変換部112、およびデコーダー113を含むニューラルネットワークに対応する。 The simulation unit 10 has a plurality of neural networks 21 to 25. Each of these neural networks 21 to 25 corresponds to the neural network including the first encoder 111, the feature converter 112, and the decoder 113 in the first embodiment.
 第1ニューラルネットワーク21は、特定の画質Aの劣化を検出するための学習がされた学習済モデルのニューラルネットワークである。第2ニューラルネットワーク22は、特定の画質Bの劣化を検出するための学習がされた学習済モデルのニューラルネットワークである。第3ニューラルネットワーク23は、特定の画質Cの劣化を検出するための学習がされた学習済モデルのニューラルネットワークである。画質A、画質B、および画質Cはそれぞれ異なる画質であり、たとえば、階調性、色再現性、先鋭性、ノイズ、濃度、および配光ムラの少なくともいずれかであり得る。 The first neural network 21 is a learned model neural network that has been trained to detect deterioration of a specific image quality A. The second neural network 22 is a learned model neural network that has been trained to detect deterioration of a specific image quality B. The third neural network 23 is a learned model neural network that has been trained to detect deterioration in specific image quality C. The image quality A, the image quality B, and the image quality C are different image quality, and may be, for example, at least one of gradation, color reproducibility, sharpness, noise, density, and uneven light distribution.
 第4ニューラルネットワーク24は、たとえば、画像形成部170の使用時間ごとの特定変化を再現するために学習された学習済モデルのうち現在の使用時間に対応する学習済モデルのニューラルネットワークである。すなわち、現在までの画像形成部170の使用時間に対応する特定変化を再現するために学習された学習済モデルのニューラルネットワークである。現在までの使用時間は、たとえば、印刷枚数の累計値を記憶部120に記憶しておき、当該累計値に基づいて算出され得る。第5ニューラルネットワーク25は、たとえば、画像読取部150の使用時間ごとの特定変化を再現するために学習された学習済モデルのうち現在の使用時間に対応する学習済モデルのニューラルネットワークである。すなわち、現在までの画像読取部150の使用時間に対応する特定変化を再現するために学習された学習済モデルのニューラルネットワークである。現在までの使用時間は、たとえば、読取枚数の累計値を記憶部120に記憶しておき、当該累計値に基づいて算出され得る。 The fourth neural network 24 is, for example, a neural network of a learned model corresponding to the current usage time among the learned models learned to reproduce the specific change for each usage time of the image forming unit 170. That is, this is a neural network of a learned model that has been learned to reproduce a specific change corresponding to the usage time of the image forming unit 170 up to now. The usage time up to the present time can be calculated based on, for example, a cumulative value of the number of printed sheets stored in the storage unit 120. For example, the fifth neural network 25 is a neural network of a learned model corresponding to the current usage time among the learned models learned to reproduce a specific change for each usage time of the image reading unit 150. That is, this is a neural network of a learned model learned to reproduce a specific change corresponding to the usage time of the image reading unit 150 up to now. The usage time up to the present time can be calculated based on, for example, a cumulative value of the number of read sheets stored in the storage unit 120.
 決定部118は、各ニューラルネットワーク21~25によるシミュレーションにより得られた複数の推定出力画像553を取得する。決定部118は、取得した複数の推定出力画像553のうち、共通の入力画像503に基づいて画像形成部170により用紙900に形成された画像が画像読取部150により読み取られることで得られた出力画像との差異が最も小さい推定出力画像553を特定する。決定部118は、特定した推定出力画像553をシミュレーションしたニューラルネットワーク21~25を決定する。 The determination unit 118 acquires the plurality of estimated output images 553 obtained by the simulations of the neural networks 21 to 25. Among the acquired estimated output images 553, the determination unit 118 identifies the estimated output image 553 having the smallest difference from the output image obtained when the image reading unit 150 reads the image formed on the paper 900 by the image forming unit 170 based on the common input image 503. The determination unit 118 then determines, among the neural networks 21 to 25, the network that simulated the identified estimated output image 553.
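The selection performed by the determination unit 118 amounts to an argmin over image differences. A minimal sketch, assuming a mean-absolute-error metric (the description does not fix a particular difference measure):

```python
import numpy as np

def image_difference(a: np.ndarray, b: np.ndarray) -> float:
    """Mean absolute pixel difference (an assumed metric)."""
    return float(np.mean(np.abs(a.astype(float) - b.astype(float))))

def pick_best_network(estimates: dict, scanned: np.ndarray):
    """Return the key of the network whose estimated output image is
    closest to the actually scanned output image."""
    return min(estimates, key=lambda k: image_difference(estimates[k], scanned))

# Toy 4x4 images standing in for the estimated output images and the scan.
scanned = np.full((4, 4), 128, dtype=np.uint8)
estimates = {
    "nn1": np.full((4, 4), 100, dtype=np.uint8),
    "nn4": np.full((4, 4), 130, dtype=np.uint8),  # closest to the scan
    "nn5": np.full((4, 4), 90, dtype=np.uint8),
}
best = pick_best_network(estimates, scanned)
```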
 シミュレーション部10は、決定されたニューラルネットワーク21~25により、印刷物検査時に、入力画像503から推定出力画像553をシミュレーションする。 The simulation unit 10 uses the determined neural networks 21 to 25 to simulate the estimated output image 553 from the input image 503 at the time of printed matter inspection.
 上述した実施形態は、以下の効果を奏する。 The embodiment described above has the following effects.
 記録媒体への画像の形成および画像の読み取りによる特定変化を再現するようにあらかじめ学習された学習済モデルのニューラルネットワークにより、RIPデータの入力画像から、特定変化後の画像をシミュレーションする。そして、シミュレーションにより得られた画像と、検査対象である印刷物に形成された画像が読み取られることで得られた画像とを比較し、比較結果に基づいてヤレ紙を特定する。これにより、作業量を増大させることなく、ヤレ紙の検出精度を向上できる。 The image after the specific change is simulated from the RIP-data input image by a neural network of a learned model trained in advance to reproduce the specific change caused by forming the image on a recording medium and reading the formed image. The image obtained by the simulation is then compared with the image obtained by reading the image formed on the printed matter under inspection, and a defective sheet (yare-shi) is identified based on the comparison result. This improves the accuracy of detecting defective sheets without increasing the amount of work.
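The compare-and-identify step can be sketched as a per-pixel threshold test; the threshold value and the zero-tolerance pixel count below are assumptions for illustration, not values from the description.

```python
import numpy as np

DEFECT_THRESHOLD = 30  # assumed per-pixel tolerance

def defect_mask(estimated: np.ndarray, scanned: np.ndarray,
                threshold: int = DEFECT_THRESHOLD) -> np.ndarray:
    """Boolean mask of pixels whose simulated/scanned difference exceeds
    the tolerance."""
    diff = np.abs(estimated.astype(int) - scanned.astype(int))
    return diff > threshold

def is_defective_sheet(estimated: np.ndarray, scanned: np.ndarray,
                       max_bad_pixels: int = 0) -> bool:
    """Flag the sheet as defective when too many pixels deviate."""
    return int(defect_mask(estimated, scanned).sum()) > max_bad_pixels

est = np.full((8, 8), 120, dtype=np.uint8)  # simulated estimated output
good_scan = est.copy()                      # scan matching the estimate
bad_scan = est.copy()
bad_scan[2, 3] = 255                        # a single spot defect
```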
 さらに、ニューラルネットワークを、RIPデータの学習用入力画像と、当該学習用入力画像に特定変化が加えられた学習用出力画像と、の組合せの学習用データが入力されたときに、学習用出力画像に加えられた特定変化が、学習用入力画像に対して再現された推定出力画像を出力するようにあらかじめ学習させる。これにより、ヤレ紙の検出精度をさらに向上できる。 Further, the neural network is trained in advance so that, when learning data consisting of a combination of a RIP-data learning input image and a learning output image obtained by applying a specific change to that learning input image is input, it outputs an estimated output image in which the specific change applied to the learning output image is reproduced on the learning input image. This further improves the accuracy of detecting defective sheets.
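A training pair of this kind can be sketched by synthesizing the "specific change" on a RIP input image; the gain and noise parameters below are invented for illustration, whereas real learning output images would come from actual print-and-scan cycles.

```python
import numpy as np

rng = np.random.default_rng(0)

def degrade(img: np.ndarray, gain: float = 0.9, noise_sigma: float = 5.0):
    """Apply an illustrative specific change: density loss plus noise."""
    noisy = img.astype(float) * gain + rng.normal(0.0, noise_sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def make_training_pair(rip_image: np.ndarray):
    """One learning sample: the learning input image together with the
    learning output image carrying the specific change."""
    return rip_image, degrade(rip_image)

x, y = make_training_pair(np.full((16, 16), 200, dtype=np.uint8))
```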
 さらに、特定変化を、階調性、色再現性、先鋭性、ノイズ、濃度、および配光ムラの少なくともいずれかの画質の変化とする。そして、上記ニューラルネットワークを、階調性、色再現性、先鋭性、ノイズ、濃度、および配光ムラのいずれかを再現するためのチャートの学習用入力画像と、当該チャートの学習用入力画像に上記画像の変化が加えられた学習用出力画像と、の組合せを用いてあらかじめ学習された学習済モデルのニューラルネットワークとする。これにより、より効率的にヤレ紙の検出精度を向上できる。 Furthermore, the specific change is a change in at least one of gradation, color reproducibility, sharpness, noise, density, and light distribution unevenness. The neural network is a learned model trained in advance using combinations of a chart learning input image for reproducing one of these image qualities and a learning output image obtained by applying the image change to that chart image. This improves the accuracy of detecting defective sheets more efficiently.
 さらに、上記ニューラルネットワークを、文字のチャートの学習用入力画像と、文字のチャートの学習用入力画像に画像の変化が加えられた学習用出力画像と、の組合せを用いてあらかじめ学習された学習済モデルのニューラルネットワークとする。これにより、文字を含む印刷物に関するヤレ紙検出精度を効果的に向上できる。 Further, the neural network is a learned model trained in advance using combinations of a learning input image of a character chart and a learning output image obtained by applying the image change to that input image. This effectively improves the accuracy of detecting defective sheets for printed matter containing characters.
 さらに、シミュレーション部が、異なる学習済モデルの複数のニューラルネットワークを有するものとし、出力画像との差異が最も小さい推定出力画像をシミュレーションしたニューラルネットワークを決定する。そして、決定したニューラルネットワークにより、入力画像から推定出力画像をシミュレーションする。これにより、現在の装置の劣化度や、特定の画質の劣化の観点から学習された学習済モデルのうち、推定出力画像のシミュレーション精度の最も高い学習済モデルのニューラルネットワークにより推定出力画像をシミュレーションできるため、ヤレ紙検出精度をさらに効果的に向上できる。 Further, the simulation unit has a plurality of neural networks of different learned models, and the neural network that simulated the estimated output image having the smallest difference from the output image is determined. The estimated output image is then simulated from the input image with the determined neural network. Because the estimated output image can thus be simulated with the neural network that, among learned models trained for the current degree of device deterioration or for deterioration of a specific image quality, achieves the highest simulation accuracy, the accuracy of detecting defective sheets is improved even more effectively.
 本発明は、上述した実施形態に限定されない。 The present invention is not limited to the above-described embodiment.
 たとえば、実施形態においては、シミュレーション部と第2エンコーダーとを用いたニューラルネットワークによる学習によって特定変化をシミュレーションするための学習をし、学習後のシミュレーション部により特定変化をシミュレーションしている。しかし、他の構造のニューラルネットワークにより、特定変化をシミュレーションするための学習をし、学習後のシミュレーション部により特定変化をシミュレーションしてもよい。また、ニューラルネットワークを用いない機械学習により、特定変化をシミュレーションするための学習をし、学習後のシミュレーション部により特定変化をシミュレーションしてもよい。 For example, in the embodiment, learning for simulating a specific change is performed by learning with a neural network using a simulation unit and a second encoder, and the specific change is simulated by a simulation unit after learning. However, learning for simulating a specific change may be performed using a neural network having another structure, and the specific change may be simulated by a simulation unit after learning. Further, learning for simulating a specific change may be performed by machine learning without using a neural network, and the specific change may be simulated by a simulation unit after learning.
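As one concrete non-neural-network alternative (an illustration, not taken from the embodiment), the specific change could be approximated by fitting a tone curve with least squares from (input, scanned output) samples:

```python
import numpy as np

# Sample gray levels and the (pretend) values measured from real scans.
inputs = np.array([0.0, 64.0, 128.0, 192.0, 255.0])
outputs = 0.85 * inputs + 10.0  # stand-in for measured scan values

# Least-squares fit of output ~ a * input + b (highest-order coefficient first).
coeffs = np.polyfit(inputs, outputs, deg=1)

def simulate_tone(pixel_values):
    """Predict the post-print-and-scan gray level for given input levels."""
    return np.polyval(coeffs, pixel_values)
```

On exactly linear sample data the fit recovers the slope and intercept, so the fitted curve reproduces the assumed print-and-scan change.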
 また、実施形態においては、ヤレ紙を特定するものとして説明したが、製品基準を満たす微細な異常を判断し、当該異常をもつ用紙を特定してもよい。 In the embodiment, defective sheets (yare-shi) are identified; however, a minor abnormality that still satisfies the product standard may instead be determined, and the sheet having that abnormality identified.
 また、実施形態においては、記録媒体として用紙を例に説明したが、記録媒体は用紙に限定されず、樹脂フィルム等であってもよい。 In the embodiment, paper is described as an example of the recording medium; however, the recording medium is not limited to paper and may be a resin film or the like.
 また、実施形態においてプログラムにより実行される処理の一部または全部を回路などのハードウェアに置き換えて実行され得る。 Further, part or all of the processing executed by the program in the embodiment may instead be implemented by hardware such as circuits.
 本出願は、2018年4月5日に出願された日本特許出願(特願2018-073134号)に基づいており、その開示内容は、参照され、全体として、組み入れられている。 This application is based on Japanese Patent Application No. 2018-073134 filed on April 5, 2018, the disclosure of which is incorporated herein by reference in its entirety.
L1 第1ロス、
L2 第2ロス、
10 シミュレーション部、
40 作像部、
50 定着部、
60 給紙部、
70 用紙搬送部、
100 画像形成装置、
110 制御部、
111 第1エンコーダー、
112 特徴変換部、
113 デコーダー、
114 第2エンコーダー、
115 位置合せ部、
116 比較部、
117 特定部、
118 決定部、
120 記憶部、
130 通信部、
140 操作表示部、
150 画像読取部、
160 画像制御部、
170 画像形成部、
500、501、502 学習用入力画像、
503、504 入力画像、
513 検査対象画像、
523 出力画像、
550、551、552、553、554 推定出力画像、
600, 601, 602 学習用出力画像、
900 用紙。
L1 first loss,
L2 second loss,
10 simulation unit,
40 image creation unit,
50 fixing unit,
60 paper feed unit,
70 paper conveyance unit,
100 image forming apparatus,
110 control unit,
111 first encoder,
112 feature conversion unit,
113 decoder,
114 second encoder,
115 alignment unit,
116 comparison unit,
117 specifying unit,
118 determination unit,
120 storage unit,
130 communication unit,
140 operation display unit,
150 image reading unit,
160 image control unit,
170 image forming unit,
500, 501, 502 learning input image,
503, 504 input image,
513 inspection target image,
523 output image,
550, 551, 552, 553, 554 estimated output image,
600, 601, 602 learning output image,
900 paper.

Claims (11)

  1.  画像形成装置による記録媒体への画像の形成と、形成された前記画像の読取装置による読取りと、による前記画像の変化を再現するようにあらかじめ学習された学習済モデルのニューラルネットワークにより、RIPデータの入力画像から、前記変化後のRIPデータの画像を、推定出力画像としてシミュレーションするシミュレーション部と、
     前記入力画像に基づいて前記画像形成装置により記録媒体に形成された画像が、前記読取装置により読み取られることで得られた出力画像と、前記推定出力画像とを比較する比較部と、
     前記比較部による比較結果に基づいて、異常の記録媒体を特定する特定部と、
     を有する印刷物検査装置。
    a simulation unit that simulates, as an estimated output image, an image of the RIP data after the change from an input image of RIP data, using a neural network of a learned model trained in advance to reproduce the change in the image caused by formation of the image on a recording medium by an image forming apparatus and reading of the formed image by a reading apparatus;
    a comparison unit that compares the estimated output image with an output image obtained when the reading apparatus reads an image formed on a recording medium by the image forming apparatus based on the input image; and
    a specifying unit that specifies an abnormal recording medium based on a comparison result of the comparison unit;
    A printed matter inspection apparatus.
  2.  前記ニューラルネットワークは、RIPデータの学習用入力画像と、前記学習用入力画像に前記画像の変化が加えられた学習用出力画像と、の組合せの学習用データが入力されたときに、前記学習用出力画像に加えられた前記画像の変化が、前記学習用入力画像に対して再現された前記推定出力画像を出力するようにあらかじめ学習された学習済モデルのニューラルネットワークである、請求項1に記載の印刷物検査装置。 The printed matter inspection apparatus according to claim 1, wherein the neural network is a neural network of a learned model trained in advance so that, when learning data consisting of a combination of a learning input image of RIP data and a learning output image obtained by applying the change in the image to the learning input image is input, it outputs the estimated output image in which the change in the image applied to the learning output image is reproduced on the learning input image.
  3.  前記画像の変化は、階調性、色再現性、先鋭性、ノイズ、濃度、および配光ムラの少なくともいずれかの画質の変化であり、
     前記ニューラルネットワークは、階調性、色再現性、先鋭性、ノイズ、濃度、および配光ムラのいずれかを再現するためのチャートの前記学習用入力画像と、前記チャートの前記学習用入力画像に前記画像の変化が加えられた前記学習用出力画像と、の組合せを用いてあらかじめ学習された学習済モデルのニューラルネットワークである、請求項2に記載の印刷物検査装置。
    The change in the image is a change in image quality of at least one of gradation, color reproducibility, sharpness, noise, density, and light distribution unevenness, and
    the neural network is a neural network of a learned model trained in advance using combinations of the learning input image of a chart for reproducing one of gradation, color reproducibility, sharpness, noise, density, and light distribution unevenness and the learning output image obtained by applying the change in the image to the learning input image of the chart. The printed matter inspection apparatus according to claim 2.
  4.  前記ニューラルネットワークは、文字のチャートの前記学習用入力画像と、前記文字のチャートの前記学習用入力画像に前記画像の変化が加えられた前記学習用出力画像と、の組合せを用いてあらかじめ学習された学習済モデルのニューラルネットワークである、請求項3に記載の印刷物検査装置。 The printed matter inspection apparatus according to claim 3, wherein the neural network is a neural network of a learned model trained in advance using combinations of the learning input image of a character chart and the learning output image obtained by applying the change in the image to the learning input image of the character chart.
  5.  前記シミュレーション部は、特定の画質の劣化を検出するための前記学習済モデル、ならびに、前記画像形成装置および前記読取装置の使用時間ごとの前記変化をそれぞれ再現するための学習済モデルのうち現在の使用時間に対応する学習済モデル、の複数のニューラルネットワークにより、共通の前記入力画像から前記推定出力画像をそれぞれシミュレーションし、
     前記シミュレーション部によるシミュレーションにより得られた複数の前記推定出力画像のうち、前記共通の前記入力画像に基づいて前記画像形成装置により記録媒体に形成された画像が、前記読取装置により読み取られることで得られた前記出力画像との差異が最も小さい前記推定出力画像を特定し、特定した前記推定出力画像をシミュレーションした前記学習済モデルのニューラルネットワークを決定する決定部をさらに有し、
     前記シミュレーション部は、前記決定部により決定された前記学習済モデルのニューラルネットワークにより、前記入力画像から前記推定出力画像をシミュレーションする、請求項1に記載の印刷物検査装置。
    The simulation unit simulates the estimated output image from the common input image with each of a plurality of neural networks of learned models, namely the learned model for detecting deterioration of a specific image quality and, among learned models for reproducing the change for each usage time of the image forming apparatus and the reading apparatus, the learned model corresponding to the current usage time,
    the apparatus further comprises a determination unit that identifies, among the plurality of estimated output images obtained by the simulations of the simulation unit, the estimated output image having the smallest difference from the output image obtained when the reading apparatus reads an image formed on a recording medium by the image forming apparatus based on the common input image, and that determines the neural network of the learned model that simulated the identified estimated output image, and
    the simulation unit simulates the estimated output image from the input image with the neural network of the learned model determined by the determination unit. The printed matter inspection apparatus according to claim 1.
  6.  画像形成装置による記録媒体への画像の形成と、形成された前記画像の読取装置による読み取りとによる前記画像の変化を、前記変化を再現するようにあらかじめ学習された学習済モデルのニューラルネットワークにより、RIPデータの入力画像から、前記変化後のRIPデータの出力画像を推定出力画像としてシミュレーションする段階(a)と、
     前記入力画像に基づいて前記画像形成装置により記録媒体に形成された画像が、前記読取装置により読み取られることで得られた前記出力画像と、前記推定出力画像とを比較する段階(b)と、
     前記段階(b)における比較結果に基づいて、異常の記録媒体を特定する段階(c)と、
     を有する印刷物検査方法。
    (a) simulating, as an estimated output image, an output image of the RIP data after the change from an input image of RIP data, using a neural network of a learned model trained in advance to reproduce the change in the image caused by formation of the image on a recording medium by an image forming apparatus and reading of the formed image by a reading apparatus;
    (b) comparing the estimated output image with the output image obtained when the reading apparatus reads an image formed on a recording medium by the image forming apparatus based on the input image; and
    (c) specifying an abnormal recording medium based on the comparison result in the step (b);
    A printed matter inspection method.
  7.  前記ニューラルネットワークは、前記入力画像と、前記入力画像に基づいて前記画像形成装置により記録媒体に形成された画像が、前記読取装置により読み取られることで得られた前記出力画像との組合せの教師データを用いてあらかじめ学習された学習済モデルのニューラルネットワークである、請求項6に記載の印刷物検査方法。 The printed matter inspection method according to claim 6, wherein the neural network is a neural network of a learned model trained in advance using teacher data consisting of combinations of the input image and the output image obtained when the reading apparatus reads an image formed on a recording medium by the image forming apparatus based on the input image.
  8.  前記変化は、階調性、色再現性、先鋭性、ノイズ、濃度、および配光ムラの少なくともいずれかの、画質の変化であり、
     前記ニューラルネットワークは、階調性、色再現性、先鋭性、ノイズ、濃度、および配光ムラをそれぞれ再現するためのチャートの前記入力画像と、前記チャートの前記入力画像に対応する前記出力画像との組合せの教師データを用いてあらかじめ学習された学習済モデルのニューラルネットワークである、請求項7に記載の印刷物検査方法。
    The change is a change in image quality of at least one of gradation, color reproducibility, sharpness, noise, density, and light distribution unevenness, and
    the neural network is a neural network of a learned model trained in advance using teacher data consisting of combinations of the input image of a chart for reproducing gradation, color reproducibility, sharpness, noise, density, or light distribution unevenness and the output image corresponding to the input image of the chart. The printed matter inspection method according to claim 7.
  9.  前記ニューラルネットワークは、文字のチャートの前記入力画像と、前記文字のチャートの前記入力画像に対応する前記出力画像との組合せの教師データを用いてあらかじめ学習された学習済モデルのニューラルネットワークである、請求項8に記載の印刷物検査方法。 The printed matter inspection method according to claim 8, wherein the neural network is a neural network of a learned model trained in advance using teacher data consisting of combinations of the input image of a character chart and the output image corresponding to the input image of the character chart.
  10.  前記段階(a)は、特定の画質の劣化を検出するための前記学習済モデル、ならびに前記画像形成装置および前記読取装置の少なくともいずれかの使用時間ごとの前記変化をそれぞれ再現するための学習済モデル、のそれぞれのニューラルネットワークにより、共通の前記入力画像から前記推定出力画像をそれぞれシミュレーションし、
     前記段階(a)におけるシミュレーションにより得られた複数の前記推定出力画像のうち、前記共通の前記入力画像に対応する前記出力画像との差分が小さい前記推定出力画像を決定する段階(d)をさらに有し、
     前記段階(b)は、前記共通の前記入力画像に基づいて前記画像形成装置により記録媒体に形成された画像が、前記読取装置により読み取られることで得られた前記出力画像と、前記決定部により決定された前記推定出力画像とを比較する、請求項6に記載の印刷物検査方法。
    In the step (a), the estimated output image is simulated from the common input image with each of the neural networks of the learned model for detecting deterioration of a specific image quality and of the learned models for reproducing the change for each usage time of at least one of the image forming apparatus and the reading apparatus,
    the method further comprises a step (d) of determining, among the plurality of estimated output images obtained by the simulation in the step (a), the estimated output image having a small difference from the output image corresponding to the common input image, and
    in the step (b), the output image obtained when the reading apparatus reads an image formed on a recording medium by the image forming apparatus based on the common input image is compared with the estimated output image determined in the step (d). The printed matter inspection method according to claim 6.
  11.  請求項6~10のいずれかに記載の印刷物検査方法をコンピュータにより実行するための印刷物検査プログラム。
     
    11. A printed matter inspection program for causing a computer to execute the printed matter inspection method according to any one of claims 6 to 10.
PCT/JP2019/008636 2018-04-05 2019-03-05 Printed matter inspection device, printed matter inspection method, and printed matter inspection program WO2019193900A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018-073134 2018-04-05
JP2018073134A JP2021107947A (en) 2018-04-05 2018-04-05 Printed matter inspection equipment, printed matter inspection method, and printed matter inspection program

Publications (1)

Publication Number Publication Date
WO2019193900A1 true WO2019193900A1 (en) 2019-10-10

Family

ID=68100191

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/008636 WO2019193900A1 (en) 2018-04-05 2019-03-05 Printed matter inspection device, printed matter inspection method, and printed matter inspection program

Country Status (2)

Country Link
JP (1) JP2021107947A (en)
WO (1) WO2019193900A1 (en)


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7272857B2 (en) * 2019-05-10 2023-05-12 富士フイルム株式会社 Inspection method, inspection device, program and printing device
KR102574029B1 (en) * 2023-04-07 2023-09-05 주식회사 아이브 Detecting device for detecting double stroke through an artificial inteeligence learning model and a method using the same
CN116373477B (en) * 2023-06-06 2023-08-15 山东力乐新材料研究院有限公司 Fault prediction method and system based on printing equipment operation parameter analysis

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05227338A (en) * 1992-02-12 1993-09-03 Ricoh Co Ltd Image forming device provided with learning function
JP2007172029A (en) * 2005-12-19 2007-07-05 Glory Ltd Print inspection device
JP2010165296A (en) * 2009-01-19 2010-07-29 Ricoh Co Ltd Image processing device, similarity calculation method, similarity calculation program and recording medium
JP2015178190A (en) * 2014-03-18 2015-10-08 株式会社リコー Image inspection device, image formation system and image inspection method
WO2016047377A1 (en) * 2014-09-22 2016-03-31 富士フイルム株式会社 Image-recording device, and image defect detection device and method


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021234509A1 (en) * 2020-05-17 2021-11-25 Landa Corporation Ltd. Detecting a defective nozzle in a digital printing system
JP2022012465A (en) * 2020-07-01 2022-01-17 株式会社東芝 Learning apparatus, method, program, and inference device
JP7419178B2 (en) 2020-07-01 2024-01-22 株式会社東芝 Learning devices, methods and programs
CN117533015A (en) * 2023-12-19 2024-02-09 广州市普理司科技有限公司 Digital printer flexible board sleeve position printing control system
CN117533015B (en) * 2023-12-19 2024-04-16 广州市普理司科技有限公司 Digital printer flexible board sleeve position printing control system

Also Published As

Publication number Publication date
JP2021107947A (en) 2021-07-29

Similar Documents

Publication Publication Date Title
WO2019193900A1 (en) Printed matter inspection device, printed matter inspection method, and printed matter inspection program
US10019792B2 (en) Examination device, examination method, and computer program product
US8848244B2 (en) Image inspection method, apparatus, control program overlapping inspection images to obtain positional shift
US10326916B2 (en) Inspection apparatus, inspection method and storage medium
US9065938B2 (en) Apparatus, system, and method of inspecting image, and recording medium storing image inspection control program
US20140036290A1 (en) Image inspection system and image inspection method
JP6357786B2 (en) Image inspection apparatus, image inspection system, and image inspection method
JP5962642B2 (en) Image forming apparatus and program
JP7206595B2 (en) Inspection device, inspection system, inspection method and program
US8970913B2 (en) Printing system and image forming apparatus and method that check a precision of a formed image
JP6171730B2 (en) Image inspection apparatus, image inspection method, and image inspection program
CN109116696A (en) Color lump layout determines
JP6464722B2 (en) Information processing apparatus, defect transmission method, and program
JP6705305B2 (en) Inspection device, inspection method and program
JP6635335B2 (en) Image inspection device and image forming system
JP2016035418A (en) Image processing device, output object inspection method, and program
JP6337541B2 (en) Image inspection apparatus, image forming system, and image inspection method
JP6848286B2 (en) Inspection equipment, inspection methods and programs
JP2019171726A (en) Image formation system, quality determination method and computer program
JP2016061603A (en) Projection device and projection method
JP6665544B2 (en) Inspection device, inspection system, inspection method and program
JP6277803B2 (en) Image inspection apparatus, image forming system, and image inspection program
JP2010262243A (en) Image-forming apparatus
JP2021184512A (en) Image forming apparatus
JP5906867B2 (en) Printing system and image forming apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19782326

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19782326

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP