WO2022138685A1 - Image processing method and image processing device - Google Patents
Image processing method and image processing device
- Publication number: WO2022138685A1 (application PCT/JP2021/047457)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- preprocessed
- singular
- feature
- test
- Prior art date
Classifications
- G06T7/0008—Industrial image inspection checking presence/absence
- G06T7/001—Industrial image inspection using an image reference approach
- G06T7/90—Determination of colour characteristics
- G06T5/20—Image enhancement or restoration using local operators
- G06T9/00—Image coding
- B41J29/393—Devices for controlling or analysing the entire machine; controlling or analysing mechanical parameters involving printing of test patterns
- G03G15/00—Apparatus for electrographic processes using a charge pattern
- G03G15/01—Apparatus for electrographic processes using a charge pattern for producing multicoloured copies
- G03G21/00—Arrangements not provided for by groups G03G13/00 - G03G19/00, e.g. cleaning, elimination of residual charge
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
- G06V10/50—Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; projection analysis
- G06V10/751—Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
- G06V10/764—Image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
- H04N1/407—Control or modification of tonal gradation or of extreme levels, e.g. background level
- G06T2207/10008—Still image; photographic image from scanner, fax or copier
- G06T2207/10024—Color image
- G06T2207/20192—Edge enhancement; edge preservation
- G06T2207/30108—Industrial image inspection
- G06T2207/30144—Printing quality
Definitions
- the present invention relates to an image processing method and an image processing apparatus for determining the cause of image defects based on a test image.
- An image forming apparatus such as a printer or a multifunction device executes a printing process for forming an image on a sheet.
- image defects such as vertical streaks, horizontal streaks, noise spots, and uneven density may occur in the image formed on the output sheet.
- when the image forming apparatus executes the printing process by an electrophotographic method, various parts such as a photoconductor, a charging part, a developing part, and a transfer part can be the cause of the image defect.
- skill is required to determine the cause of the image defect.
- for vertical streaks, which are an example of the image defect, a known technique identifies the causal phenomenon based on characteristic information such as the color, density, or screen line count of the vertical streaks in the test image, together with table data (see, for example, Patent Document 1).
- the table data defines, for each type of phenomenon that causes vertical streaks, a threshold-bounded range for parameters such as image color, density, and screen line count.
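the table-data lookup described above can be sketched as follows; this is an illustrative sketch only, and the phenomenon names and parameter ranges are invented, not taken from the patent:

```python
# Hypothetical threshold table: each row maps parameter ranges to a causal
# phenomenon. All names and numbers here are illustrative assumptions.
TABLE = [
    {"phenomenon": "photoconductor scratch", "density": (0.60, 1.00), "line_count": (0, 100)},
    {"phenomenon": "charging roller dirt",   "density": (0.20, 0.60), "line_count": (100, 300)},
]

def lookup_cause(density: float, line_count: int) -> str:
    """Return the first phenomenon whose parameter ranges contain the measured values."""
    for row in TABLE:
        d_lo, d_hi = row["density"]
        c_lo, c_hi = row["line_count"]
        if d_lo <= density < d_hi and c_lo <= line_count < c_hi:
            return row["phenomenon"]
    return "unknown"
```

a lookup of this kind is simple but, as the description notes below, prone to omissions when a measured value falls outside every configured range.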
- the test image may include a plurality of types of the image defects.
- in such a case, it is desirable that the image processing apparatus can extract the defective image portion from the test image for each type by a simple process.
- An object of the present invention is to provide an image processing method and an image processing apparatus capable of extracting image defects occurring in an image forming apparatus from a test image for each type by simple processing.
- the image processing method is a method in which a processor determines an image defect in a test image obtained through an image reading process on an output sheet of an image forming apparatus.
- the image processing method includes generating a first preprocessed image by the processor performing a first preprocessing including a main filter process in which the lateral direction of the test image is the processing direction.
- the main filter process converts the pixel value of each pixel of interest, sequentially selected from the test image, into a converted value.
- the converted value is obtained by emphasizing the difference between the pixel values of a region of interest including the pixel of interest and the pixel values of two adjacent regions on both sides of the region of interest along a preset processing direction.
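a minimal one-dimensional sketch of this main filter process is shown below. The window widths, the edge handling, and the exact emphasis formula (twice the region-of-interest mean minus the two neighbour means) are illustrative assumptions, not details disclosed in the patent:

```python
def main_filter_row(row, half=1):
    """Convert each pixel value along one row into a value that emphasizes the
    difference between the window centred on it (region of interest) and the
    two windows adjacent to it on either side in the processing direction."""
    def mean(xs, fallback):
        # fall back to the region-of-interest mean at the image edges
        return sum(xs) / len(xs) if xs else fallback

    out = []
    for i in range(len(row)):
        roi = row[max(0, i - half): i + half + 1]
        roi_mean = sum(roi) / len(roi)
        left = row[max(0, i - 3 * half - 1): max(0, i - half)]
        right = row[i + half + 1: i + 3 * half + 2]
        # emphasize how much the region of interest differs from both neighbours
        out.append(2 * roi_mean - mean(left, roi_mean) - mean(right, roi_mean))
    return out
```

applied row by row this corresponds to the lateral processing direction; applying it column by column gives the vertical processing direction of the second preprocessing.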
- the image processing method includes generating a second preprocessed image by the processor performing a second preprocessing including the main filter processing in which the vertical direction of the test image is the processing direction.
- the image processing method further includes a singular part extraction process in which the processor extracts, as the image defect, each of a first singular part, a second singular part, and a third singular part from among singular parts composed of one or more significant pixels in the first preprocessed image and the second preprocessed image.
- the first singular part exists in the first preprocessed image and is not common to the first preprocessed image and the second preprocessed image.
- the second singular part exists in the second preprocessed image and is not common to the first preprocessed image and the second preprocessed image.
- the third singular part is common to the first preprocessed image and the second preprocessed image.
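a minimal sketch of this three-way split follows, assuming each preprocessed image has already been thresholded into a boolean mask of "significant" pixels (the thresholding step itself is not shown and is an assumption here):

```python
def extract_singular_parts(mask1, mask2):
    """Split significant pixels into the part unique to mask1 (first singular
    part), the part unique to mask2 (second singular part), and the part
    common to both (third singular part)."""
    first, second, third = [], [], []
    for row1, row2 in zip(mask1, mask2):
        first.append([a and not b for a, b in zip(row1, row2)])
        second.append([b and not a for a, b in zip(row1, row2)])
        third.append([a and b for a, b in zip(row1, row2)])
    return first, second, third
```

presumably, streak-like defects aligned with one filter direction survive only one preprocessing, while point-like defects such as noise spots survive both and so fall into the common (third) part.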
- the image processing apparatus includes a processor that executes the processing of the image processing method.
- according to the present invention, it is possible to provide an image processing method and an image processing apparatus capable of extracting image defects occurring in an image forming apparatus from a test image for each type by simple processing.
- FIG. 1 is a configuration diagram of an image processing device according to an embodiment.
- FIG. 2 is a block diagram showing a configuration of a data processing unit in the image processing apparatus according to the embodiment.
- FIG. 3 is a flowchart showing an example of a procedure for image defect determination processing in the image processing apparatus according to the embodiment.
- FIG. 4 is a flowchart showing an example of a procedure for determining a peculiar defect in the image processing apparatus according to the embodiment.
- FIG. 5 is a flowchart showing an example of a procedure for determining density unevenness in the image processing apparatus according to the embodiment.
- FIG. 6 is a diagram showing an example of a test image including a singular portion and an example of a preprocessed image and a feature image generated based on the test image.
- FIG. 7 is a diagram showing an example of a region of interest and an adjacent region sequentially selected from a test image in the main filter processing by the image processing apparatus according to the embodiment.
- FIG. 8 is a diagram showing an example of a test image including periodic density unevenness and vertical waveform data derived based on the test image.
- FIG. 9 is a flowchart showing an example of the procedure of the feature image generation processing in the first application example of the image processing apparatus according to the embodiment.
- FIG. 10 is a flowchart showing an example of a procedure of feature image generation processing in the second application example of the image processing apparatus according to the embodiment.
- FIG. 11 is a flowchart showing an example of the procedure of the feature image generation processing in the third application example of the image processing apparatus according to the embodiment.
- FIG. 12 is a flowchart showing an example of the procedure of the feature image generation processing in the fourth application example of the image processing apparatus according to the embodiment.
- the image processing device 10 includes an image forming device 2 that executes a print process.
- the printing process is a process of forming an image on a sheet.
- the sheet is an image forming medium such as paper or a sheet-shaped resin member.
- the image processing device 10 also includes an image reading device 1 that executes a reading process for reading an image from a document.
- the image processing device 10 is a copying machine, a facsimile machine, a multifunction device, or the like.
- the image to be printed is an image read from the original by the image reading device 1 or an image represented by print data received from a host device (not shown).
- the host device is an information processing device such as a personal computer or a mobile information terminal.
- the image forming apparatus 2 can form a predetermined original test image g01 on the sheet by the printing process (see FIG. 6).
- the original test image g01 is the image that is the source of the test image g1 used for determining the presence or absence and the cause of image defects in the image forming apparatus 2 (see FIG. 6).
- the test image g1 will be described later.
- the process including the reading process of the image reading device 1 and the printing process of the image forming apparatus 2 based on the image obtained by the reading process is a copy process.
- the image forming apparatus 2 includes a sheet transport mechanism 3 and a printing unit 4.
- the sheet transport mechanism 3 includes a sheet delivery mechanism 31 and a plurality of sets of sheet transport roller pairs 32.
- the sheet delivery mechanism 31 feeds the sheet from the sheet accommodating section 21 to the sheet transport path 30.
- the plurality of sets of sheet transport roller pairs 32 transport the sheet along the sheet transport path 30, and discharge the sheet on which the image is formed to the discharge tray 22.
- the print unit 4 executes the print process on the sheet conveyed by the sheet transport mechanism 3.
- the print unit 4 executes the print process in an electrophotographic manner.
- the printing unit 4 includes an image forming unit 4x, a laser scanning unit 4y, a transfer device 44, and a fixing device 46.
- the image forming unit 4x includes a drum-shaped photoconductor 41, a charging device 42, a developing device 43, and a drum cleaning device 45.
- the photoconductor 41 rotates, and the charging device 42 uniformly charges the surface of the photoconductor 41.
- the charging device 42 includes a charging roller 42a that rotates in contact with the surface of the photoconductor 41.
- the laser scanning unit 4y writes an electrostatic latent image on the surface of the photoconductor 41 charged by scanning the laser beam.
- the developing device 43 develops the electrostatic latent image into a toner image.
- the developing device 43 includes a developing roller 43a that supplies toner to the photoconductor 41.
- the transfer device 44 transfers the toner image on the surface of the photoconductor 41 to the sheet.
- the toner is an example of a granular developer.
- the fixing device 46 fixes the toner image on the sheet by heating it.
- the fixing device 46 includes a fixing rotating body 46a that comes into contact with the sheet and rotates, and a fixing heater 46b that heats the fixing rotating body 46a.
- the image forming apparatus 2 shown in FIG. 1 is a tandem type color printing apparatus, and can execute the printing process of a color image. Therefore, the print unit 4 includes four image forming units 4x corresponding to toners of different colors.
- the transfer device 44 includes four primary transfer rollers 441 corresponding to the four photoconductors 41, an intermediate transfer belt 440, a secondary transfer roller 442, and a belt cleaning device 443.
- the four image forming portions 4x form the toner images of cyan, magenta, yellow, and black on the surface of the photoconductor 41, respectively.
- Each of the primary transfer rollers 441 is also a part of each of the image forming portions 4x.
- the primary transfer roller 441 urges the intermediate transfer belt 440 to the surface of the photoconductor 41 while rotating.
- the primary transfer roller 441 transfers the toner image from the photoconductor 41 to the intermediate transfer belt 440.
- a color image composed of the toner images of four colors is formed on the intermediate transfer belt 440.
- the drum cleaning device 45 removes and recovers the toner remaining on the photoconductor 41 without being transferred to the intermediate transfer belt 440 from the photoconductor 41.
- the secondary transfer roller 442 transfers the toner images of the four colors on the intermediate transfer belt 440 to the sheet.
- the photoconductor 41 and the intermediate transfer belt 440 of the transfer device 44 are each an example of an image carrier that carries the toner image while rotating.
- the belt cleaning device 443 removes and recovers the toner remaining on the intermediate transfer belt 440 without being transferred to the sheet from the intermediate transfer belt 440.
- the image processing device 10 includes a data processing unit 8 and a human interface device 800 in addition to the image forming device 2 and the image reading device 1.
- the human interface device 800 includes an operation unit 801 and a display unit 802.
- the data processing unit 8 executes various data processes related to the print process or the read process, and further controls various electric devices.
- the operation unit 801 is a device that accepts user operations.
- the operation unit 801 includes one or both of a push button and a touch panel.
- the display unit 802 includes a display panel that displays information provided to the user.
- the data processing unit 8 includes a CPU (Central Processing Unit) 80, a RAM (Random Access Memory) 81, a secondary storage device 82, and a communication device 83.
- the CPU 80 can execute processing of received data of the communication device 83, various image processing, and control of the image forming device 2.
- the received data may include the print data.
- the CPU 80 is an example of a processor that executes data processing including the image processing.
- the CPU 80 may be realized by another type of processor such as a DSP (Digital Signal Processor).
- the communication device 83 is a communication interface device that communicates with another device such as the host device through a network such as a LAN (Local Area Network).
- the CPU 80 performs all transmission and reception of data to and from the other device through the communication device 83.
- the secondary storage device 82 is a non-volatile storage device that can be read by a computer.
- the secondary storage device 82 stores a computer program executed by the CPU 80 and various data referenced by the CPU 80.
- for example, a flash memory or a hard disk drive is adopted as the secondary storage device 82.
- the RAM 81 is a computer-readable volatile storage device.
- the RAM 81 temporarily stores the computer program executed by the CPU 80 and the data output and referenced by the CPU 80 in the course of executing the program.
- the CPU 80 includes a plurality of processing modules realized by executing the computer program.
- the plurality of processing modules include a main control unit 8a, a job control unit 8b, and the like.
- a part or all of the plurality of processing modules may be realized by an independent processor other than the CPU 80 such as a DSP.
- the main control unit 8a executes a job selection process according to an operation on the operation unit 801, a process of displaying information on the display unit 802, a process of setting various data, and the like. Further, the main control unit 8a also executes a process of determining the content of the received data of the communication device 83.
- the job control unit 8b controls the image reading device 1 and the image forming device 2. For example, the job control unit 8b causes the image forming apparatus 2 to execute the print process based on the received data when the received data of the communication device 83 includes the print data.
- in the copy process, the job control unit 8b causes the image reading device 1 to execute the reading process, and causes the image forming apparatus 2 to execute the print process based on the image obtained by the reading process.
- image defects such as vertical streaks Ps11, horizontal streaks Ps12, noise points Ps13, or density unevenness may occur in the image formed on the output sheet (see FIGS. 6 and 8).
- the image forming apparatus 2 executes the printing process by an electrophotographic method.
- the cause of the image defect is considered to be various parts such as the photoconductor 41, the charging device 42, the developing device 43, and the transfer device 44. Further, skill is required to determine the cause of the image defect.
- the image forming apparatus 2 executes a test print process for forming a predetermined original test image g01 on the sheet.
- hereinafter, the sheet on which the original test image g01 is formed is referred to as a test output sheet 9 (see FIG. 1).
- the main control unit 8a causes the display unit 802 to display a predetermined guidance message when the test print process is executed.
- this guidance message prompts the user to set the test output sheet 9 in the image reading device 1 and then perform a reading start operation on the operation unit 801.
- when the reading start operation is performed, the job control unit 8b causes the image reading device 1 to execute the reading process.
- as a result, the original test image g01 is read by the image reading device 1 from the test output sheet 9 output by the image forming apparatus 2, and a read image corresponding to the original test image g01 is obtained.
- the CPU 80 executes a process of determining the presence or absence and the cause of the image defect based on the read image, or on the test image g1 obtained by compressing the read image (see FIG. 6).
- the CPU 80 is an example of a processor that executes the processing of an image processing method for determining the presence or absence and the cause of the image defect.
- the device that reads the original test image g01 from the test output sheet 9 may be, for example, a digital camera.
- the process in which the image reading device 1 or the digital camera reads the original test image g01 from the test output sheet 9 is an example of the image reading process for the test output sheet 9.
- the test image g1 may include a plurality of types of the image defects. In this case, it is preferable to determine the cause of the image defect in the test image for each type of the image defect in order to simplify the determination process and improve the determination accuracy.
- the image processing apparatus 10 can extract the defective image portion from the test image g1 for each type by a simple process.
- the CPU 80 executes an image defect determination process described later (see FIG. 3). Thereby, the CPU 80 can extract the image defect generated in the image forming apparatus 2 from the test image g1 for each type by a simple process.
- when the cause of the image defect is determined by comparing the value of a specific image parameter, such as the color, density, or screen line count, with a predetermined threshold value, determination omissions and erroneous determinations easily occur.
- on the other hand, an image pattern recognition process is suitable for classifying an input image into one of many events with high accuracy.
- the pattern recognition process determines which of a plurality of event candidates the input image corresponds to, using a learning model trained with sample images corresponding to the plurality of event candidates as teacher data.
- however, if the test image g1 including the image defect is used directly as the input image of the pattern recognition process, the calculation amount of the pattern recognition process becomes very large. Therefore, it is difficult to execute such a pattern recognition process on a processor provided in a multifunction device or the like.
- in the present embodiment, the CPU 80 performs the image pattern recognition process in a manner that suppresses the calculation amount, whereby the cause of the image defect can be determined with high accuracy.
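one way to suppress the calculation amount, sketched below under stated assumptions, is to classify only a small fixed-size feature image rather than the whole test image. The 16x16 default size, the nearest-neighbour downscaling, and the callable `model` stub are illustrative assumptions and not the patent's disclosed implementation:

```python
def to_feature_image(image, size=16):
    """Downscale a 2-D list of pixel values to size x size by nearest neighbour,
    keeping the pattern-recognition input small regardless of scan resolution."""
    h, w = len(image), len(image[0])
    return [[image[r * h // size][c * w // size] for c in range(size)]
            for r in range(size)]

def classify(feature_image, model):
    """Flatten the small feature image and defer to a trained model
    (hypothetically an SVM, random forest, or small CNN)."""
    flat = [v for row in feature_image for v in row]
    return model(flat)
```

the learning-model call is deliberately left abstract; the point of the sketch is that the classifier sees a fixed, small number of inputs.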
- the image such as the test image g1 to be processed by the CPU 80 is digital image data.
- the digital image data constitutes map data including a plurality of pixel values corresponding to a two-dimensional coordinate area spanned by the main scanning direction D1 and the sub-scanning direction D2 intersecting the main scanning direction D1, for each of the three primary colors.
- the three primary colors are, for example, red, green and blue.
- the sub-scanning direction D2 is orthogonal to the main scanning direction D1.
- the main scanning direction D1 is the horizontal direction in the test image g1
- the sub-scanning direction D2 is the vertical direction in the test image g1.
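the map data described above can be modelled as one pixel-value plane per primary colour, indexed by sub-scanning row (D2, vertical) and main-scanning column (D1, horizontal). The sizes and the dictionary-of-planes layout below are illustrative assumptions:

```python
WIDTH, HEIGHT = 4, 3  # main-scanning (D1) x sub-scanning (D2), illustrative sizes

def blank_test_image(width=WIDTH, height=HEIGHT, value=0):
    """Return {channel: height x width map of pixel values} for the three
    primary colors red, green, and blue."""
    return {ch: [[value] * width for _ in range(height)]
            for ch in ("red", "green", "blue")}

img = blank_test_image()
img["red"][1][2] = 255  # row 1 along D2 (vertical), column 2 along D1 (horizontal)
```

indexing as `img[channel][d2][d1]` matches the convention that D1 is the horizontal direction and D2 the vertical direction of the test image g1.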
- the original test image g01 and the test image g1 are mixed color halftone images in which a plurality of uniform monochromatic halftone images corresponding to a plurality of developed colors in the image forming apparatus 2 are combined.
- the plurality of monochromatic halftone images are images uniformly formed with a reference density of a predetermined intermediate gradation.
- the original test image g01 and the test image g1 are mixed color halftone images in which four uniform monochromatic halftone images corresponding to all the developed colors in the image forming apparatus 2 are combined.
- in the present embodiment, one test output sheet 9 including one original test image g01 is output. Therefore, one test image g1 corresponding to the original test image g01 is the target of the image defect determination.
- the plurality of processing modules in the CPU 80 further include a feature image generation unit 8c, a singular part identification unit 8d, a color vector identification unit 8e, a periodicity determination unit 8f, a pattern recognition unit 8g, and a random unevenness determination unit 8h (see FIG. 2).
- S101, S102, ... represent identification codes of the steps in the image defect determination process.
- first, the main control unit 8a causes the feature image generation unit 8c to execute the process of step S101 in the image defect determination process.
- in step S101, the feature image generation unit 8c generates the test image g1 from the read image obtained by the image reading process for the test output sheet 9.
- for example, the feature image generation unit 8c extracts, as the test image g1, the portion of the read image excluding the margin area at the outer edge.
- alternatively, the feature image generation unit 8c generates the test image g1 by compressing that portion of the read image to a predetermined reference resolution.
- the feature image generation unit 8c compresses the read image only when the resolution of the read image is higher than the reference resolution.
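the crop-then-compress step can be sketched as follows; the one-pixel margin, the 100 dpi reference resolution, and the block-mean compression are illustrative assumptions, not values from the patent:

```python
def make_test_image(scan, scan_dpi, ref_dpi=100, margin=1):
    """Crop the outer-edge margin, then compress to the reference resolution
    only when the scan is finer than it."""
    # crop the margin area at the outer edge of the read image
    cropped = [row[margin:len(row) - margin] for row in scan[margin:len(scan) - margin]]
    if scan_dpi <= ref_dpi:
        return cropped  # already at or below the reference resolution
    factor = scan_dpi // ref_dpi
    h, w = len(cropped) // factor, len(cropped[0]) // factor
    # compress by averaging factor x factor pixel blocks
    return [[sum(cropped[r * factor + i][c * factor + j]
                 for i in range(factor) for j in range(factor)) / factor ** 2
             for c in range(w)] for r in range(h)]
```

compressing before any further filtering keeps every later step, including the main filter process and pattern recognition, working on a bounded number of pixels.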
- the main control unit 8a shifts the process to the step S102.
- the peculiar defect determination process is a process for determining the presence or absence of the peculiar portion Ps1 such as the vertical streak Ps11, the horizontal streak Ps12, or the noise point Ps13 in the test image g1 and the cause of the occurrence of the peculiar portion Ps1 (see FIG. 6).
- the singular part Ps1 is an example of the image defect.
- the main control unit 8a shifts the process to step S103 when the singular defect determination process is completed.
- in step S103, the periodicity determination unit 8f starts the density unevenness determination process described later. The main control unit 8a shifts the process to step S104 when the density unevenness determination process is completed.
- in step S104, the main control unit 8a shifts the process to step S105 when the process of step S102 or step S103 has determined that an image defect has occurred, and shifts the process to step S106 otherwise.
- in step S105, the main control unit 8a executes a defect handling process associated in advance with the type and cause of the image defect determined to have occurred in the process of step S102 or step S103.
- the defect handling process includes one or both of the first handling process and the second handling process shown below.
- the first handling process is a process of displaying, on the display unit 802, a message prompting replacement of the component causing the image defect.
- the second handling process is a process of correcting an image forming parameter in order to eliminate or alleviate the image defect.
- the image forming parameter is a parameter relating to the control of the image forming unit 4x.
- the main control unit 8a ends the image defect determination process after executing the defect response process.
- in step S106, the main control unit 8a executes a normal notification indicating that no image defect was identified, and then ends the image defect determination process.
- S201, S202, ... represent the identification codes of the steps in the singular defect determination process.
- the singular defect determination process is started from step S201.
- in step S201, the feature image generation unit 8c generates a plurality of feature images g21, g22, g23 by executing a predetermined feature extraction process on the test image g1.
- Each of the feature images g21, g22, and g23 is an image obtained by extracting a predetermined specific type of singular portion Ps1 in the test image g1.
- the plurality of feature images g21, g22, g23 include the first feature image g21, the second feature image g22, and the third feature image g23 (see FIG. 6).
- the first feature image g21 is an image from which the vertical streaks Ps11 in the test image g1 are extracted.
- the second feature image g22 is an image from which the horizontal streaks Ps12 in the test image g1 are extracted.
- the third feature image g23 is an image from which the noise points Ps13 in the test image g1 are extracted.
- the feature extraction process includes a first preprocess, a second preprocess, and a singular part extraction process.
- the pixels sequentially selected from the test image g1 are referred to as the pixel of interest Px1 (see FIGS. 6 and 7).
- the feature image generation unit 8c generates the first preprocessed image g11 by executing the first preprocessing for the test image g1 with the main scanning direction D1 as the processing direction Dx1 (see FIG. 6).
- the feature image generation unit 8c generates the second preprocessed image g12 by executing the second preprocessing for the test image g1 with the sub-scanning direction D2 as the processing direction Dx1 (see FIG. 6).
- the feature image generation unit 8c generates three feature images g21, g22, g23 by executing the singular part extraction process for the first preprocessed image g11 and the second preprocessed image g12.
- the first pre-processing includes a main filter processing in which the main scanning direction D1 is the processing direction Dx1.
- in the main filter processing, the pixel value of each pixel of interest Px1 sequentially selected from the test image g1 is converted into a conversion value that emphasizes the difference between the pixel values of the region of interest Ax1 and the pixel values of the two adjacent regions Ax2 (see FIGS. 6 and 7).
- the area of interest Ax1 is an area including the pixel of interest Px1, and the two adjacent areas Ax2 are areas adjacent to both sides in the processing direction Dx1 preset with respect to the area of interest Ax1.
- the region of interest Ax1 and the adjacent region Ax2 are regions containing one or more pixels, respectively.
- the sizes of the region of interest Ax1 and the adjacent region Ax2 are set according to the width of the vertical streaks Ps11 or the horizontal streaks Ps12 to be extracted, or the size of the noise points Ps13 to be extracted.
- the area of interest Ax1 and the adjacent area Ax2 each occupy the same range in the direction intersecting the processing direction Dx1.
- the region of interest Ax1 is a region of 21 pixels spanning 3 columns and 7 rows centered on the pixel of interest Px1.
- each of the adjacent regions Ax2 is likewise a region of 21 pixels spanning 3 columns and 7 rows.
- the number of rows is the number of lines along the processing direction Dx1
- the number of columns is the number of lines along the direction intersecting the processing direction Dx1.
- the size of each of the region of interest Ax1 and the adjacent region Ax2 is preset.
- each pixel value of the region of interest Ax1 is converted into a first correction value using a predetermined first correction coefficient K1, and each pixel value of the two adjacent regions Ax2 is converted into a second correction value using a predetermined second correction coefficient K2.
- the first correction coefficient K1 is a coefficient of 1 or more by which each pixel value of the region of interest Ax1 is multiplied, and the second correction coefficient K2 is a negative coefficient by which each pixel value of the adjacent regions Ax2 is multiplied.
- the first correction coefficient K1 and the second correction coefficient K2 are set so that the sum of the first correction coefficient K1 multiplied by the number of pixels of the region of interest Ax1 and the second correction coefficient K2 multiplied by the number of pixels of the two adjacent regions Ax2 is zero.
- the feature image generation unit 8c derives the first correction value for each pixel of the region of interest Ax1 by multiplying that pixel value by the first correction coefficient K1, and derives the second correction value for each pixel of the two adjacent regions Ax2 by multiplying that pixel value by the second correction coefficient K2. The feature image generation unit 8c then derives the sum of the first correction values and the second correction values as the conversion value for the pixel of interest Px1.
- alternatively, the feature image generation unit 8c may derive the conversion value by adding the total or average of the first correction values corresponding to the pixels of the region of interest Ax1 to the total or average of the second correction values corresponding to the pixels of the two adjacent regions Ax2.
- the absolute value of the converted value is a value obtained by amplifying the absolute value of the difference between the pixel value of the region of interest Ax1 and the pixel value of the two adjacent regions Ax2.
- the process of deriving the converted value by integrating the first correction value and the second correction value is an example of a process of emphasizing the difference between the pixel value of the region of interest Ax1 and the pixel value of the two adjacent regions Ax2.
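As a concrete illustration, the main filter processing can be sketched as follows (a minimal sketch assuming the zero-sum constraint on K1 and K2 and the 3-by-7-pixel regions; the function name, window orientation, and K1 = 1 are illustrative choices, not values fixed by the text):

```python
import numpy as np

def main_filter_value(image, r, c, k1=1.0, along=3, across=7):
    """Conversion value of the main filter processing for the pixel of
    interest at (r, c), with the processing direction Dx1 taken along
    axis 1 (the main scanning direction D1 in the first preprocessing).

    The region of interest Ax1 is `along` x `across` pixels centred on
    the pixel of interest; the two adjacent regions Ax2 are blocks of
    the same size on either side along Dx1.  K2 is derived from the
    zero-sum constraint K1*|Ax1| + K2*(2*|Ax1|) = 0 stated in the text.
    """
    k2 = -k1 / 2.0
    r0, r1 = r - across // 2, r + across // 2 + 1
    c0 = c - along // 2
    ax1 = image[r0:r1, c0:c0 + along]                # region of interest Ax1
    left = image[r0:r1, c0 - along:c0]               # adjacent region Ax2
    right = image[r0:r1, c0 + along:c0 + 2 * along]  # adjacent region Ax2
    return k1 * ax1.sum() + k2 * (left.sum() + right.sum())
```

With a uniform image the conversion value is zero, while a streak running across the processing direction yields a large positive value, which is the behavior the zero-sum constraint is designed to produce.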
- alternatively, the first correction coefficient K1 may be a negative number and the second correction coefficient K2 a positive number.
- it is conceivable that the feature image generation unit 8c generates, as the first preprocessed image g11, first main map data including the plurality of conversion values obtained by the main filter processing in which the main scanning direction D1 is the processing direction Dx1.
- in the present embodiment, the first main map data is generated by the main filter processing in which the main scanning direction D1 is the processing direction Dx1.
- when the test image g1 includes horizontal streaks Ps12, the main filter processing in which the main scanning direction D1 is the processing direction Dx1 generates first main map data from which the horizontal streaks Ps12 in the test image g1 are removed.
- the vertical streaks Ps11 correspond to the first singular part
- the horizontal streaks Ps12 correspond to the second singular part
- the noise point Ps13 corresponds to the third singular part.
- the second pre-processing includes the main filter processing in which the sub-scanning direction D2 is the processing direction Dx1.
- it is conceivable that the feature image generation unit 8c generates, as the second preprocessed image g12, second main map data including the plurality of conversion values obtained by the main filter processing in which the sub-scanning direction D2 is the processing direction Dx1.
- in the present embodiment, the second main map data is generated by the main filter processing in which the sub-scanning direction D2 is the processing direction Dx1.
- when the test image g1 includes vertical streaks Ps11, the main filter processing in which the sub-scanning direction D2 is the processing direction Dx1 generates second main map data from which the vertical streaks Ps11 in the test image g1 are removed.
- however, the main filter processing may derive an erroneous conversion value whose sign is opposite to that of the conversion value representing the original state of the singular part Ps1. Such an erroneous conversion value may adversely affect the determination of image defects when it is processed as a pixel value representing the singular part Ps1.
- the first pre-processing further includes an edge enhancement filter processing in which the main scanning direction D1 is the processing direction Dx1 in addition to the main filter processing in which the main scanning direction D1 is the processing direction Dx1.
- the second pre-processing further includes the edge enhancement filter processing in which the sub-scanning direction D2 is the processing direction Dx1 in addition to the main filter processing in which the sub-scanning direction D2 is the processing direction Dx1.
- the edge enhancement filter process is a process of performing edge enhancement for one of the region of interest Ax1 and the two adjacent regions Ax2, which is predetermined.
- in the edge enhancement filter processing, the pixel value of each pixel of interest Px1 sequentially selected from the test image g1 is converted into an edge strength that sums a third correction value, obtained by correcting the pixel value of the region of interest Ax1 with a positive or negative third correction coefficient K3, and a fourth correction value, obtained by correcting the pixel value of one adjacent region Ax2 with a fourth correction coefficient K4 whose sign is opposite to that of the third correction coefficient K3 (see FIG. 6).
- for example, the third correction coefficient K3 is set as a positive coefficient and the fourth correction coefficient K4 as a negative coefficient.
- by executing the edge enhancement filter processing with the main scanning direction D1 as the processing direction Dx1, horizontal edge intensity map data is generated in which each pixel value of the test image g1 is converted into the edge strength.
- likewise, by executing the edge enhancement filter processing with the sub-scanning direction D2 as the processing direction Dx1, vertical edge intensity map data is generated in which each pixel value of the test image g1 is converted into the edge strength.
- the feature image generation unit 8c generates the first main map data by executing the main filter processing in which the main scanning direction D1 is the processing direction Dx1.
- the feature image generation unit 8c generates the horizontal edge intensity map data by executing the edge enhancement filter processing in which the main scanning direction D1 is the processing direction Dx1.
- the feature image generation unit 8c then generates the first preprocessed image g11 by correcting each pixel value of the first main map data with the corresponding pixel value of the horizontal edge intensity map data. For example, the feature image generation unit 8c generates the first preprocessed image g11 by adding the absolute value of each pixel value of the horizontal edge intensity map data to the corresponding pixel value of the first main map data.
- the feature image generation unit 8c generates the second main map data by executing the main filter processing in which the sub-scanning direction D2 is the processing direction Dx1.
- the feature image generation unit 8c generates the vertical edge intensity map data by executing the edge enhancement filter processing in which the sub-scanning direction D2 is the processing direction Dx1.
- the feature image generation unit 8c generates the second preprocessed image g12 by correcting each pixel value of the second main map data with each pixel value of the corresponding vertical edge intensity map data. For example, the feature image generation unit 8c generates the second preprocessed image g12 by adding the absolute value of each pixel value of the vertical edge intensity map data to each pixel value of the second main map data.
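The edge enhancement filter processing and the correction of the main map data can be sketched as follows (one-pixel regions of interest and K3 = 1 are the simplest choices the definitions above allow; the function names are illustrative):

```python
import numpy as np

def edge_intensity_map(test_image, axis):
    """Edge enhancement filter processing along one processing
    direction Dx1 (axis=1: main scanning direction D1, axis=0:
    sub-scanning direction D2).  Each pixel value becomes the sum of a
    third correction value (K3 times the region of interest) and a
    fourth correction value (K4 = -K3 times one adjacent region)."""
    k3, k4 = 1.0, -1.0
    adjacent = np.roll(test_image, -1, axis=axis)   # one adjacent region Ax2
    return k3 * test_image + k4 * adjacent

def preprocessed_image(main_map, edge_map):
    """g11 / g12 sketch: correct each pixel of the main map data by
    adding the absolute value of the corresponding edge intensity."""
    return main_map + np.abs(edge_map)
```

The absolute-value correction mirrors the example above in which the edge intensity is added to the main map data regardless of its sign.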
- the singular part extraction process is a process of generating three feature images g21, g22, and g23 in which the vertical streaks Ps11, the horizontal streaks Ps12, and the noise points Ps13 contained in the first preprocessed image g11 or the second preprocessed image g12 are individually extracted.
- the three feature images g21, g22, and g23 are the first feature image g21, the second feature image g22, and the third feature image g23.
- the first feature image g21 is an image that extracts, from among the singular parts Ps1 composed of one or more significant pixels in the first preprocessed image g11 and the second preprocessed image g12, the singular parts Ps1 that are present in the first preprocessed image g11 but are not common to the first preprocessed image g11 and the second preprocessed image g12.
- the first feature image g21 does not include the horizontal streaks Ps12 and the noise point Ps13, and includes the vertical streaks Ps11 when the first preprocessed image g11 includes the vertical streaks Ps11.
- the significant pixel is a pixel that can be distinguished from other pixels by comparing each pixel value in the test image g1 or an index value based on each pixel value with a predetermined threshold value.
- the second feature image g22 is an image that extracts, from among the singular parts Ps1 in the first preprocessed image g11 and the second preprocessed image g12, the singular parts Ps1 that are present in the second preprocessed image g12 but are not common to the first preprocessed image g11 and the second preprocessed image g12.
- the second feature image g22 does not include the vertical streaks Ps11 and the noise point Ps13, and includes the horizontal streaks Ps12 when the second preprocessed image g12 contains the horizontal streaks Ps12.
- the third feature image g23 is an image from which the singular portion Ps1 common to the first preprocessed image g11 and the second preprocessed image g12 is extracted.
- the third feature image g23 does not include the vertical streaks Ps11 and the horizontal streaks Ps12, and includes the noise points Ps13 when the first preprocessed image g11 and the second preprocessed image g12 include the noise points Ps13.
- the feature image generation unit 8c derives the index value Zi by applying, to equation (1) below, the first pixel value Xi, which is a pixel value exceeding a predetermined reference value in the first preprocessed image g11, and the second pixel value Yi, which is a pixel value exceeding the reference value in the second preprocessed image g12.
- the subscript i is an identification number of the position of each pixel.
- the index value Zi of the pixels constituting the vertical streaks Ps11 is a relatively large positive number. Further, the index value Zi of the pixels constituting the horizontal stripes Ps12 is a relatively small negative number. Further, the index value Zi of the pixels constituting the noise point Ps13 is 0 or a value close to 0.
- the index value Zi is an example of an index value of the difference between the corresponding pixel values in the first preprocessed image g11 and the second preprocessed image g12.
- the above properties of the index value Zi can be used to simplify the processes of extracting the vertical streaks Ps11 from the first preprocessed image g11, extracting the horizontal streaks Ps12 from the second preprocessed image g12, and extracting the noise points Ps13 from the first preprocessed image g11 or the second preprocessed image g12.
- the feature image generation unit 8c generates the first feature image g21 by converting the first pixel value Xi in the first preprocessed image g11 into the first specificity Pi derived by equation (2) below. As a result, the first feature image g21, in which the vertical streaks Ps11 are extracted from the first preprocessed image g11, is generated.
- the feature image generation unit 8c generates the second feature image g22 by converting the second pixel value Yi in the second preprocessed image g12 into the second specificity Qi derived by equation (3) below. As a result, the second feature image g22, in which the horizontal streaks Ps12 are extracted from the second preprocessed image g12, is generated.
- the feature image generation unit 8c generates the third feature image g23 by converting the first pixel value Xi in the first preprocessed image g11 into the third specificity Ri derived by equation (4) below. As a result, the third feature image g23, in which the noise points Ps13 are extracted from the first preprocessed image g11, is generated.
- alternatively, the feature image generation unit 8c may generate the third feature image g23 by converting the second pixel value Yi in the second preprocessed image g12 into the third specificity Ri derived by equation (5) below. In that case, the third feature image g23, in which the noise points Ps13 are extracted from the second preprocessed image g12, is generated.
- in this way, the feature image generation unit 8c generates the first feature image g21 by a process of converting each pixel value of the first preprocessed image g11 with predetermined equation (2), which is based on the index value Zi.
- Equation (2) is an example of the first conversion equation.
- the feature image generation unit 8c generates the second feature image g22 by a process of converting each pixel value of the second preprocessed image g12 by a predetermined equation (3) based on the index value Zi.
- Equation (3) is an example of the second conversion equation.
- similarly, the feature image generation unit 8c generates the third feature image g23 by a process of converting each pixel value of the first preprocessed image g11 or the second preprocessed image g12 with predetermined equation (4) or (5), which is based on the index value Zi. Equations (4) and (5) are each examples of the third conversion equation.
- the process of generating the first feature image g21, the second feature image g22, and the third feature image g23 in step S201 is an example of a process of extracting the vertical streaks Ps11, the horizontal streaks Ps12, and the noise points Ps13 as the image defects from among one or more singular parts Ps1 in the first preprocessed image g11 and the second preprocessed image g12.
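Equations (1) through (5) are not reproduced in this text, but the stated properties of Zi admit a simple sketch: a normalized difference that is strongly positive for vertical streaks (Xi much larger than Yi), strongly negative for horizontal streaks (Yi much larger than Xi), and near zero for noise points (Xi close to Yi). The forms below are assumptions chosen to match those properties, not the patent's actual equations:

```python
import numpy as np

def index_value(x, y, eps=1e-9):
    """Index value Zi of the difference between corresponding pixel
    values Xi (first preprocessed image g11) and Yi (second
    preprocessed image g12).  An assumed normalised-difference form,
    not the patent's equation (1)."""
    return (x - y) / (x + y + eps)

def specificities(x, y):
    """Assumed analogues of equations (2)-(5): weight each pixel value
    by how well Zi matches the target defect type."""
    z = index_value(x, y)
    p = x * np.clip(z, 0.0, 1.0)                 # first specificity Pi: vertical streaks
    q = y * np.clip(-z, 0.0, 1.0)                # second specificity Qi: horizontal streaks
    r = x * np.clip(1.0 - np.abs(z), 0.0, 1.0)   # third specificity Ri: noise points
    return p, q, r
```

Under these assumed forms, a pixel present only in g11 survives in Pi, one present only in g12 survives in Qi, and one present in both survives in Ri, which is the separation the three feature images require.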
- the feature image generation unit 8c shifts the process to step S202 after the feature images g21, g22, and g23 are generated.
- in step S202, the singular part specifying unit 8d specifies the position of the singular part Ps1 in each of the feature images g21, g22, and g23.
- specifically, the singular part specifying unit 8d determines that a portion having a pixel value outside a predetermined reference range in each of the feature images g21, g22, and g23 is the singular part Ps1.
- when a plurality of singular parts Ps1 exist within a predetermined proximity range in the main scanning direction D1 and the sub-scanning direction D2 in any of the feature images g21, g22, and g23, the singular part specifying unit 8d executes a binding process that joins the plurality of singular parts Ps1 into one continuous singular part Ps1.
- for example, when the first feature image g21 includes two vertical streaks Ps11 within the proximity range, the singular part specifying unit 8d combines the two vertical streaks Ps11 into one vertical streak Ps11 by the binding process. Likewise, when the second feature image g22 includes two horizontal streaks Ps12 within the proximity range, the singular part specifying unit 8d combines the two horizontal streaks Ps12 into one horizontal streak Ps12 by the binding process.
- when the third feature image g23 includes a plurality of noise points Ps13 arranged at intervals within the proximity range in the main scanning direction D1 or the sub-scanning direction D2, the singular part specifying unit 8d combines the plurality of noise points Ps13 into one noise point Ps13 by the binding process.
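The binding process of step S202 can be sketched in one dimension as follows (the real process works in both the main scanning direction D1 and the sub-scanning direction D2; the function name and list representation are illustrative):

```python
def bind_singular_parts(positions, proximity):
    """Binding process sketch: singular parts whose positions along one
    scanning direction fall within the proximity range of each other
    are joined into one series (one combined singular part Ps1)."""
    merged = []
    for p in sorted(positions):
        if merged and p - merged[-1][-1] <= proximity:
            merged[-1].append(p)        # extend the current series
        else:
            merged.append([p])          # start a new singular part
    return merged
```

Each returned sublist is then treated as a single singular part Ps1 in the subsequent steps.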
- the singular part specifying unit 8d ends the singular defect determination process when the position of the singular part Ps1 is not specified in any of the three feature images g21, g22, and g23.
- otherwise, the singular part specifying unit 8d shifts the process to step S203.
- in step S203, the color vector specifying unit 8e specifies a color vector representing a vector in the color space from one of the color of the singular part Ps1 in the test image g1 and the color of a reference region including the periphery of the singular part Ps1 to the other.
- the reference region is a region within a predetermined range determined with reference to the singular portion Ps1.
- the reference region is a region including a peripheral region adjacent to the singular portion Ps1 and not including the singular portion Ps1.
- the reference region may include a singular portion Ps1 and a peripheral region adjacent to the singular portion Ps1.
- the test image g1 is originally a uniform halftone image. Therefore, when a good test image g1 is formed on the test output sheet 9, the singular portion Ps1 is not specified, and the color vector at any position of the test image g1 is a substantially zero vector.
- therefore, the direction of the color vector between the singular part Ps1 and the reference region corresponding to the singular part Ps1 represents an excess or a shortage in the toner density of one of the four developed colors in the image forming apparatus 2.
- the direction of the color vector indicates which of the four image forming portions 4x in the image forming apparatus 2 is the cause of the occurrence of the singular portion Ps1.
- the color vector specifying unit 8e may specify the color of the singular part Ps1 in the test image g1 and the vector in the color space from one of the predetermined reference colors to the other as the color vector.
- the reference color is the original color of the test image g1.
- further, in step S203, the color vector specifying unit 8e determines, based on the color vector, the developed color causing the singular part Ps1 and whether the density of that developed color is excessive or insufficient.
- information on a plurality of unit vectors, each indicating the direction of an excess or a shortage in the density of cyan, magenta, yellow, or black relative to the reference color of the test image g1, is stored in advance in the secondary storage device 82.
- the color vector specifying unit 8e normalizes the color vector to a predetermined unit length. The color vector specifying unit 8e then determines which of the plurality of unit vectors corresponding to an excess or a shortage in the density of cyan, magenta, yellow, or black is closest to the normalized color vector, thereby determining the developed color causing the singular part Ps1 and whether the density of that developed color is excessive or insufficient.
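The determination of step S203 can be sketched as follows (the unit-vector table is device data stored in the secondary storage device 82; the dictionary keys and the three-component color space used here are illustrative assumptions):

```python
import numpy as np

def classify_color_vector(color_vector, unit_vectors):
    """Step S203 sketch: normalise the colour vector to unit length and
    pick the stored unit vector closest to it.  A near-zero colour
    vector means the test image is close to its reference colour, so
    no defect colour is returned."""
    v = np.asarray(color_vector, dtype=float)
    norm = np.linalg.norm(v)
    if norm < 1e-9:
        return None                     # substantially zero vector
    v = v / norm
    return min(unit_vectors,
               key=lambda name: np.linalg.norm(v - unit_vectors[name]))
```

The key returned identifies both the developed color and the direction (excess or shortage) of its density deviation.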
- the color vector specifying unit 8e shifts the process to step S204 after executing the process of step S203.
- in step S204, the periodicity determination unit 8f shifts the process to step S205 when the singular part Ps1 is specified in one or both of the second feature image g22 and the third feature image g23, and shifts the process to step S206 otherwise.
- one or both of the second feature image g22 and the third feature image g23 in which the singular portion Ps1 is specified is referred to as a periodicity determination target image.
- the peculiar portion Ps1 in the periodicity determination target image is a horizontal streak Ps12 or a noise point Ps13 (see FIG. 6).
- in step S205, the periodicity determination unit 8f executes a periodic singular part determination process for the periodicity determination target image.
- the periodic singular part determination process includes a number determination process, a singular part periodicity determination process, and a singular part periodicity cause determination process.
- the number determination process is a process for determining the number of singular portions Ps1 lined up in the sub-scanning direction D2 in the periodicity determination target image.
- the periodicity determination unit 8f determines the number of horizontal streaks Ps12 arranged in the sub-scanning direction D2 by counting, in the second feature image g22, the horizontal streaks Ps12 whose portions occupying the same range in the main scanning direction D1 exceed a predetermined ratio.
- likewise, the periodicity determination unit 8f determines the number of noise points Ps13 arranged in the sub-scanning direction D2 by counting, in the third feature image g23, the noise points Ps13 whose positional deviation in the main scanning direction D1 is within a predetermined range.
- the periodicity determination unit 8f executes the singular part periodicity determination process only for singular parts Ps1 of which two or more are arranged in the sub-scanning direction D2.
- the periodicity determination unit 8f determines that a singular part Ps1 appearing only once in the sub-scanning direction D2 has no periodicity, and skips the singular part periodicity determination process and the singular part periodicity cause determination process for it.
- the singular part periodicity determination process is a process for determining the presence or absence of one or more predetermined periodicities in the sub-scanning direction D2 for the periodicity determination target image.
- the periodicity corresponds to the outer peripheral length of an image-related rotating body such as the photoconductor 41, the charging roller 42a, the developing roller 43a, or the primary transfer roller 441 in each of the image forming units 4x or in the transfer device 44.
- the state of the rotating body related to the image formation affects the quality of the image formed on the sheet.
- the image-related rotating body is referred to as an image-forming rotating body.
- the periodicity corresponding to the outer peripheral length of an image-forming rotating body may appear as the interval in the sub-scanning direction D2 between the plurality of horizontal streaks Ps12 or the plurality of noise points Ps13.
- when the periodicity determination target image has the periodicity corresponding to the outer peripheral length of an image-forming rotating body, that image-forming rotating body can be regarded as the cause of the horizontal streaks Ps12 or the noise points Ps13 in the periodicity determination target image.
- when two singular parts Ps1 are arranged in the sub-scanning direction D2, the periodicity determination unit 8f executes an interval derivation process as the singular part periodicity determination process, deriving the interval between the two singular parts Ps1 in the sub-scanning direction D2 as the period of the two singular parts Ps1.
- when three or more singular parts Ps1 are arranged in the sub-scanning direction D2, the periodicity determination unit 8f executes a frequency analysis process as the singular part periodicity determination process.
- in the frequency analysis process, the periodicity determination unit 8f performs frequency analysis, such as a Fourier transform, on the periodicity determination target image including three or more singular parts Ps1 arranged in the sub-scanning direction D2, and specifies the singular part frequency, which is the dominant frequency in the frequency distribution of the data string of the singular parts Ps1 in the target image.
- the periodicity determination unit 8f derives the period corresponding to the singular part frequency as the period of three or more singular parts Ps1.
- in the singular part periodicity cause determination process, the periodicity determination unit 8f determines, for each of a plurality of predetermined candidates for the image-forming rotating body, whether or not the outer peripheral length of the candidate, which is set in advance, and the period of the singular part Ps1 satisfy a predetermined periodic approximation condition.
- the plurality of candidates for the image-forming rotating body in step S205 is an example of a plurality of predetermined cause candidates corresponding to the horizontal streaks Ps12 or the noise point Ps13.
- the periodicity determination unit 8f determines that the candidate for the image-forming rotating body found to satisfy the periodic approximation condition in the singular part periodicity cause determination process is the cause of the occurrence of the periodic singular parts. As a result, the cause of the horizontal streaks Ps12 or the noise points Ps13 is determined.
- based on the color vector determined in step S203, the periodicity determination unit 8f also determines in which of the four image forming units 4x, each with a different developed color, the image-forming rotating body causing the horizontal streaks Ps12 or the noise points Ps13 is located.
- when the three or more singular parts Ps1 arranged in the sub-scanning direction D2 include an aperiodic singular part that does not correspond to the singular part frequency, the periodicity determination unit 8f makes that aperiodic singular part a target of the feature pattern recognition process described later.
- the periodicity determination unit 8f generates inverse Fourier transform data by performing an inverse Fourier transform on the frequency distribution obtained by the Fourier transform from which frequency components other than the singular part frequency have been removed.
- the periodicity determination unit 8f determines, as the aperiodic singular part, any of the three or more singular parts Ps1 arranged in the sub-scanning direction D2 that exists at a position deviating from a peak position in the waveform along the sub-scanning direction D2 indicated by the inverse Fourier transform data.
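The frequency analysis process and the periodic approximation condition of step S205 can be sketched as follows (a minimal sketch; the condition's exact form is not given in this text, so a relative tolerance is assumed, and the candidate names are illustrative):

```python
import numpy as np

def singular_part_period(profile):
    """Frequency analysis sketch: Fourier-transform the
    sub-scanning-direction profile of the periodicity determination
    target image and return the period of the dominant non-DC
    frequency component (in pixels)."""
    spectrum = np.abs(np.fft.rfft(profile - np.mean(profile)))
    k = int(np.argmax(spectrum[1:])) + 1   # skip the DC bin
    return len(profile) / k

def matches_rotating_body(period, candidates, tol=0.05):
    """Periodic approximation condition sketch: a candidate
    image-forming rotating body matches when the singular-part period
    approximates its preset outer peripheral length within a relative
    tolerance."""
    return [name for name, length in candidates.items()
            if abs(period - length) <= tol * length]
```

A matched candidate is then reported as the cause of the periodic horizontal streaks Ps12 or noise points Ps13, as described above.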
- when the periodicity determination unit 8f determines, as a result of the process of step S205, that the second feature image g22 and the third feature image g23 do not include the aperiodic singular part, it ends the singular defect determination process.
- otherwise, the process is shifted to step S206.
- In step S206, the pattern recognition unit 8g executes a feature pattern recognition process on each of the first feature image g21 and the second feature image g22 and third feature image g23 that include the aperiodic singular portion.
- The second feature image g22 including the aperiodic singular portion and the third feature image g23 including the aperiodic singular portion are examples of the aperiodic feature image.
- In the feature pattern recognition process, the first feature image g21 and the second feature image g22 and third feature image g23 including the aperiodic singular portion are each used as an input image.
- The pattern recognition unit 8g determines, by pattern recognition of the input image, which of a plurality of predetermined cause candidates corresponding to the image defect the input image corresponds to.
- The input image of the feature pattern recognition process may include horizontal edge intensity map data or vertical edge intensity map data obtained by edge enhancement filter processing.
- The first feature image g21 and the horizontal edge intensity map data are used as the input image.
- The second feature image g22 and the vertical edge intensity map data are used as the input image.
- The third feature image g23 and one or both of the horizontal edge intensity map data and the vertical edge intensity map data are used as the input image.
- The feature pattern recognition process is a process of classifying the input image into one of the plurality of cause candidates by a learning model trained in advance using a plurality of sample images corresponding to the plurality of cause candidates as teacher data.
- The learning model is, for example, a model adopting a classification-type machine learning algorithm called a random forest, a model applying a machine learning algorithm called an SVM (Support Vector Machine), or a model employing a CNN (Convolutional Neural Network).
- SVM: Support Vector Machine
- CNN: Convolutional Neural Network
- The learning model is prepared individually for each of the first feature image g21 and the second feature image g22 and third feature image g23 including the aperiodic singular portion. For each learning model, the plurality of sample images for each cause candidate are used as the teacher data.
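As a compact stand-in for the random-forest/SVM/CNN models named above, the following sketch classifies a feature vector into a cause candidate by nearest centroid. The labels, the feature vectors, and the classifier itself are illustrative assumptions, not the patent's actual models.

```python
import math

def train_centroids(teacher_data):
    """teacher_data: {cause_candidate: [feature_vectors]}.
    Returns one mean vector (centroid) per cause candidate."""
    centroids = {}
    for label, vecs in teacher_data.items():
        dim = len(vecs[0])
        centroids[label] = [sum(v[i] for v in vecs) / len(vecs)
                            for i in range(dim)]
    return centroids

def classify(centroids, vec):
    """Classify an input feature vector into the nearest cause candidate."""
    return min(centroids, key=lambda label: math.dist(centroids[label], vec))
```

A real deployment would substitute a trained random forest, SVM, or CNN per feature image, as the text describes.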
- The pattern recognition unit 8g determines, based on the color vector determined in step S203, which of the four image forming units 4x having different development colors is the source of the vertical streaks Ps11, the horizontal streaks Ps12, or the noise points Ps13.
- Through step S206, the cause of the vertical streaks Ps11 and the causes of the horizontal streaks Ps12 and the noise points Ps13 determined to be aperiodic singular portions are determined.
- The pattern recognition unit 8g ends the singular defect determination process after executing the process of step S206.
- S301, S302, ... represent the identification codes of the steps in the density unevenness determination process.
- The density unevenness determination process starts from step S301.
- In step S301, the periodicity determination unit 8f derives a vertical data string VD1 for each predetermined specific color of the test image g1.
- The specific color is a color corresponding to a development color of the image forming apparatus 2.
- The vertical data string VD1 is a data string of representative values V1 of the pixel values for each line in the main scanning direction D1 in the image of the specific color constituting the test image g1 (see FIG. 8).
- The specific colors are three of the four development colors of the image forming apparatus 2.
- The periodicity determination unit 8f converts the red, green, and blue image data constituting the test image g1 into cyan, yellow, and magenta image data.
- The periodicity determination unit 8f derives a representative value V1 of the pixel values for each line in the main scanning direction D1 for each of the three specific-color image data corresponding to the test image g1, thereby deriving three vertical data strings VD1 corresponding to cyan, yellow, and magenta.
- The specific colors may instead be the three primary colors of red, green, and blue.
- In that case, the periodicity determination unit 8f converts each pixel value of the three red, green, and blue image data in the test image g1 into a value representing the ratio of that pixel value to the average value or the total value. The periodicity determination unit 8f then derives three vertical data strings VD1 from the three converted image data.
- Red is the color corresponding to cyan, green is the color corresponding to magenta, and blue is the color corresponding to yellow. That is, cyan density unevenness appears as density unevenness in the converted red image data, magenta density unevenness appears in the converted green image data, and yellow density unevenness appears in the converted blue image data.
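One possible reading of the ratio conversion above, sketched per pixel. The exact formula is not fixed by the text; normalizing each channel against the mean of the three channels is an assumption for this sketch.

```python
def channel_ratios(r, g, b):
    """Convert one RGB pixel into per-channel ratios against the mean of
    the three channels, so that cyan unevenness surfaces in the converted
    red channel, magenta in green, and yellow in blue."""
    mean = (r + g + b) / 3.0
    if mean == 0:
        return (1.0, 1.0, 1.0)  # neutral ratios for a black pixel
    return (r / mean, g / mean, b / mean)
```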
- the representative value V1 is an average value, a maximum value, or a minimum value of the remaining pixel values obtained by excluding the pixel values of the singular portion Ps1 from all the pixel values of the line in the main scanning direction D1.
- the representative value V1 may be an average value, a maximum value, a minimum value, or the like of all the pixel values of the line in the main scanning direction D1.
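The derivation of the vertical data string VD1 can be sketched as below, using the mean of the non-singular pixels of each main-scanning line as the representative value V1; the array layout and the fallback for fully singular lines are assumptions.

```python
import numpy as np

def vertical_data_string(image, singular_mask):
    """Representative value V1 per line in the main scanning direction D1.
    image: 2-D array, rows = sub-scanning direction D2, cols = D1;
    singular_mask: True where a pixel belongs to a singular portion Ps1."""
    masked = np.where(singular_mask, np.nan, image.astype(float))
    vd1 = np.nanmean(masked, axis=1)  # mean of the remaining pixels per line
    # Fall back to the raw line mean if a line is entirely singular.
    all_singular = singular_mask.all(axis=1)
    vd1[all_singular] = image.mean(axis=1)[all_singular]
    return vd1
```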
- The periodicity determination unit 8f shifts the process to step S302 after executing the process of step S301.
- In step S302, the periodicity determination unit 8f executes a periodic unevenness determination process on the vertical data string VD1 for each specific color.
- The periodicity determination unit 8f identifies the density unevenness frequency, which is the dominant frequency in the frequency distribution of each vertical data string VD1, by frequency analysis such as a Fourier transform.
- The periodicity determination unit 8f derives the cycle corresponding to the density unevenness frequency as the density unevenness cycle in the test image g1.
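The frequency analysis in the two items above can be sketched with an FFT; the line-pitch parameter and the unit handling are assumptions for the sketch.

```python
import numpy as np

def density_unevenness_cycle(vd1, line_pitch_mm):
    """Return the cycle (in mm) corresponding to the dominant frequency
    of the vertical data string VD1."""
    signal = np.asarray(vd1, dtype=float)
    signal = signal - signal.mean()           # drop the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    k = int(np.argmax(spectrum[1:])) + 1      # dominant non-DC frequency bin
    return len(signal) * line_pitch_mm / k    # bin k -> period N * pitch / k
```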
- In the periodic unevenness determination process, the periodicity determination unit 8f determines, for each of the plurality of predetermined candidates for the image forming rotating body, whether the outer circumference of the candidate satisfies the periodic approximation condition with the density unevenness cycle. If any one of the plurality of candidates for the image forming rotating body is determined to satisfy the periodic approximation condition, it is determined that periodic density unevenness occurs in the test image g1.
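The outer-circumference check can be sketched as follows; the candidate circumference values and the 5 % tolerance standing in for the periodic approximation condition are assumed examples, not values from the patent.

```python
def matching_rotating_bodies(cycle_mm, candidates, rel_tol=0.05):
    """Return the image forming rotating body candidates whose outer
    circumference approximates the measured density unevenness cycle.
    candidates: {name: outer_circumference_mm}."""
    return [name for name, circumference in candidates.items()
            if abs(circumference - cycle_mm) <= rel_tol * circumference]
```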
- The plurality of candidates for the image forming rotating body in step S302 are an example of a plurality of predetermined cause candidates corresponding to the periodic density unevenness in the test image g1.
- The periodic density unevenness is an example of an image defect.
- The periodicity determination unit 8f determines the cause of the periodic density unevenness based on the development color corresponding to the vertical data string VD1 and the candidate image forming rotating body determined to satisfy the periodic approximation condition.
- When the pixel values vary in all of the red, green, and blue image data constituting the test image g1, the black image forming unit 4x is determined to be the cause of the periodic density unevenness.
- The periodicity determination unit 8f ends the density unevenness determination process when it determines that the periodic density unevenness has occurred in the test image g1, and shifts the process to step S303 otherwise.
- In step S303, the random unevenness determination unit 8h determines whether random density unevenness has occurred in each of the three specific-color image data corresponding to the test image g1.
- The random density unevenness is a kind of image defect.
- The random unevenness determination unit 8h determines whether the random density unevenness has occurred by determining whether the variation in pixel values in each of the three specific-color image data exceeds a predetermined allowable range.
- The magnitude of the variation in pixel values is determined by the variance, the standard deviation, or the differences between the median and the maximum and minimum values in each specific-color image data.
- When the variation exceeds the allowable range in all of the specific-color image data, the black image forming unit 4x is determined to be the cause of the random density unevenness.
- The random unevenness determination unit 8h shifts the process to step S304 when it determines that the random density unevenness has occurred in the test image g1, and ends the density unevenness determination process otherwise.
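The variation check of step S303 can be sketched with the standard deviation, one of the measures named above; the threshold value is an assumed tolerance.

```python
import statistics

def has_random_unevenness(pixel_values, allowed_std=10.0):
    """True when the spread of pixel values in one specific-color image
    exceeds the allowable range (here: a standard-deviation threshold)."""
    return statistics.pstdev(pixel_values) > allowed_std
```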
- In step S304, the pattern recognition unit 8g executes a random pattern recognition process.
- The random pattern recognition process uses the test image g1 determined to have the random density unevenness as an input image, and determines, by pattern recognition of the input image, which of the one or more cause candidates the input image corresponds to.
- The pattern recognition unit 8g ends the density unevenness determination process after executing the process of step S304.
- The CPU 80 executing the image defect determination process, which includes the singular defect determination process and the density unevenness determination process, is an example of the image processing method for determining the image defect based on the test image g1 read from the output sheet of the image forming apparatus 2.
- The feature image generation unit 8c generates the first preprocessed image g11 by executing the first preprocessing, which includes the main filter processing with the lateral direction of the test image g1 as the processing direction Dx1.
- The main filter processing converts the pixel value of each pixel of interest Px1 sequentially selected from the test image g1 into a converted value obtained by a process of emphasizing the difference between the pixel value of a region of interest Ax1 preset for the pixel of interest Px1 and the pixel values of two adjacent regions Ax2 adjacent to it on both sides in the processing direction Dx1 (see step S201 of FIG. 4 and FIG. 6).
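A minimal sketch of this difference-emphasizing filter along a horizontal processing direction. The region sizes (`half_width`, `gap`) and the plain region-mean difference are assumptions; the patent does not fix the exact conversion formula.

```python
import numpy as np

def main_filter(test_image, half_width=1, gap=1):
    """For each pixel, output the mean of its region of interest minus the
    mean of the two adjacent regions on either side along the processing
    direction (here: the horizontal axis)."""
    img = test_image.astype(float)
    h, w = img.shape
    out = np.zeros_like(img)
    for x in range(w):
        a0, a1 = max(0, x - half_width), min(w, x + half_width + 1)
        attn = img[:, a0:a1].mean(axis=1)          # region of interest Ax1
        left = img[:, max(0, a0 - gap):a0]         # adjacent region Ax2 (left)
        right = img[:, a1:min(w, a1 + gap)]        # adjacent region Ax2 (right)
        neigh = np.concatenate([left, right], axis=1)
        out[:, x] = attn - (neigh.mean(axis=1) if neigh.size else 0.0)
    return out
```

A bright vertical streak yields a strong positive response at the streak column, which is what the singular part extraction then thresholds.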
- The feature image generation unit 8c generates the second preprocessed image g12 by executing the second preprocessing, which includes the main filter processing with the vertical direction of the test image g1 as the processing direction Dx1 (see step S201 of FIG. 4 and FIG. 6).
- The feature image generation unit 8c extracts the vertical streaks Ps11, the horizontal streaks Ps12, and the noise points Ps13, each of which is an image defect, from the one or more singular portions Ps1 in the first preprocessed image g11 and the second preprocessed image g12 (see step S201 of FIG. 4 and FIG. 6).
- The feature extraction process in step S201 is a simple process with a small calculation load. By such a simple process, three feature images g21, g22, and g23, in which singular portions Ps1 of different shapes are individually extracted, can be generated from one test image g1.
- The periodicity determination unit 8f and the pattern recognition unit 8g execute the periodic singular part determination process and the feature pattern recognition process using the first feature image g21, the second feature image g22, and the third feature image g23. Thereby, the causes of the vertical streaks Ps11, the horizontal streaks Ps12, and the noise points Ps13, which are image defects, are determined (see steps S205 and S206 of FIG. 4 and FIG. 6).
- Since the cause of the image defect is determined individually for the three feature images g21, g22, and g23, each containing a different type of singular portion Ps1, the cause of the image defect can be determined with high accuracy by relatively simple determination processes.
- The periodic singular part determination process in step S205 is a process of determining the presence or absence of one or more predetermined periodicities in the sub-scanning direction D2 for the second feature image g22 or the third feature image g23, and of determining the cause of the horizontal streaks Ps12 or the noise points Ps13 according to the result of the periodicity determination.
- the horizontal streaks Ps12 or the noise points Ps13 are caused by the defect of the rotating body related to the image formation
- the horizontal streaks Ps12 are subjected to the periodic singular part determination process for determining the periodicity corresponding to the outer peripheral length of the rotating body.
- the cause of the noise point Ps13 can be determined with high accuracy.
- the input image corresponds to any of a plurality of predetermined cause candidates corresponding to the vertical streaks, the horizontal streaks, and the noise points by pattern recognition of the input image. It is a process of determining whether or not.
- the first feature image g21 and the second feature image g22 and the third feature image g23 that are determined to have no periodicity by the periodic singularity determination process are the input images in step S206. (See steps S204 to S206 in FIG. 4).
- the periodic singular part determination process in step S205 and the feature pattern recognition process in step S206 are examples of predetermined cause determination processes using the first feature image g21, the second feature image g22, and the third feature image g23. Is.
- The feature pattern recognition process using a learning model or the like is performed on each of the feature images g21, g22, g23, from which a specific type of singular portion Ps1 has been extracted. This makes it possible to determine the cause of the image defect with high accuracy while suppressing the amount of calculation of the CPU 80. Further, the learning model for each type of singular portion Ps1 can be sufficiently trained by preparing only a relatively small amount of teacher data corresponding to that specific type of singular portion Ps1.
- Step S206 is executed on, among the plurality of feature images g21, g22, g23, the first feature image g21, which was not a target of the periodic singular part determination process of step S205, and the second feature image g22 or third feature image g23 determined to have no periodicity by the periodic singular part determination process of step S205 (see steps S204 to S206 in FIG. 4).
- As a result, cases in which the cause of the image defect corresponds to the periodicity of an image-formation-related rotating body are excluded from the feature pattern recognition process, which further simplifies the feature pattern recognition process.
- The color vector specifying unit 8e specifies a color vector representing a vector in the color space from one to the other of the color of the singular portion Ps1 and the color of a reference region including the periphery of the singular portion Ps1 in the test image g1 (see step S203 in FIG. 4).
- The periodicity determination unit 8f in step S205 and the pattern recognition unit 8g in step S206 determine the causes of the vertical streaks Ps11, the horizontal streaks Ps12, and the noise points Ps13 by further using the color vector in the cause determination processes. That is, the periodicity determination unit 8f and the pattern recognition unit 8g determine, based on the color vector, which of the plurality of development colors in the image forming apparatus 2 the cause of the image defect corresponds to.
- By using the color vector, it is possible in the image forming apparatus 2, which can print color images, to determine easily and reliably which of the plurality of development colors the image defect is attributable to.
- The periodicity determination unit 8f executes the periodic unevenness determination process for each of the specific colors predetermined for the test image g1 (see steps S301 and S302 in FIG. 5).
- The periodic unevenness determination process is a process of determining the presence or absence of one or more predetermined periodicities in the sub-scanning direction D2 and, according to the result of the periodicity determination, determining the presence or absence and the cause of periodic density unevenness, which is a kind of image defect.
- As a result, the cause of the periodic density unevenness can be determined with high accuracy.
- The random unevenness determination unit 8h determines the presence or absence of random density unevenness by determining whether the variation in pixel values for each specific color of the test image g1, determined to have no periodicity by the periodic unevenness determination process in step S302, exceeds a predetermined allowable range (see step S303 in FIG. 5).
- The random density unevenness is a kind of image defect.
- The pattern recognition unit 8g executes the random pattern recognition process using the test image g1 determined to have the random density unevenness as the input image (see step S304 in FIG. 5). In the random pattern recognition process, it is determined by pattern recognition of the input image which of the one or more cause candidates the input image corresponds to.
- The test image g1 is a mixed-color halftone image in which a plurality of uniform monochromatic halftone images corresponding to the plurality of development colors in the image forming apparatus 2 are combined.
- The CPU 80 can therefore determine the cause of the image defect for all the development colors in the image forming apparatus 2 using fewer test images g1 than the number of development colors used by the image forming apparatus 2.
- S401, S402, ... represent the identification codes of the steps in the feature image generation process according to this application example.
- The feature image generation process according to this application example starts from step S401.
- In step S401, the feature image generation unit 8c selects a compression rate to be adopted from a plurality of preset compression rate candidates, and shifts the process to step S402.
- In step S402, the feature image generation unit 8c generates the test image g1 by compressing the read image at the selected compression rate.
- The processes of steps S401 and S402 are an example of the compression process. After that, the feature image generation unit 8c shifts the process to step S403.
- In step S403, the feature image generation unit 8c generates the first preprocessed image g11 by executing the first preprocessing on the compressed test image g1 obtained in step S402. After that, the feature image generation unit 8c shifts the process to step S404.
- In step S404, the feature image generation unit 8c generates the second preprocessed image g12 by executing the second preprocessing on the compressed test image g1 obtained in step S402. After that, the feature image generation unit 8c shifts the process to step S405.
- In step S405, the feature image generation unit 8c shifts the process to step S406 when the processes of steps S401 to S404 have been executed for all of the plurality of compression rate candidates; otherwise, it executes the processes of steps S401 to S404 for a different compression rate.
- That is, the feature image generation unit 8c generates a plurality of test images g1 of different sizes by compressing the read image at each of a plurality of compression rates in the compression process of steps S401 and S402.
- In steps S403 and S404, the feature image generation unit 8c performs the first preprocessing and the second preprocessing on each of the plurality of test images g1, thereby generating a plurality of first preprocessed images g11 and a plurality of second preprocessed images g12 corresponding to the plurality of test images g1.
- In step S406, the feature image generation unit 8c executes the singular part extraction process on each of the plurality of first preprocessed images g11 and the plurality of second preprocessed images g12. As a result, the feature image generation unit 8c generates a plurality of candidates for each of the first feature image g21, the second feature image g22, and the third feature image g23 corresponding to the plurality of test images g1. After that, the feature image generation unit 8c shifts the process to step S407.
- In step S407, the feature image generation unit 8c generates the first feature image g21, the second feature image g22, and the third feature image g23 by aggregating the plurality of candidates obtained in step S406. After that, the feature image generation unit 8c ends the feature image generation process.
- In the aggregation, the feature image generation unit 8c sets a representative value, such as the maximum value or the average value, of the corresponding pixel values in the plurality of candidates of the first feature image g21 as each pixel value of the first feature image g21. The same applies to the second feature image g22 and the third feature image g23.
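The per-pixel aggregation of candidates can be sketched directly from the representative-value rule above; the `mode` naming is illustrative.

```python
import numpy as np

def aggregate_candidates(candidates, mode="max"):
    """Collapse several same-shaped feature-image candidates into one by
    taking a per-pixel representative value (maximum or average)."""
    stack = np.stack([np.asarray(c, dtype=float) for c in candidates])
    return stack.max(axis=0) if mode == "max" else stack.mean(axis=0)
```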
- Steps S401 to S404, in which the first preprocessing and the second preprocessing are executed a plurality of times with different size ratios between the test image g1 and the region of interest Ax1 and adjacent regions Ax2, are an example of a process for generating the plurality of first preprocessed images g11 and the plurality of second preprocessed images g12.
- Changing the compression rate is an example of changing the size ratio between the test image g1 and the sizes of the region of interest Ax1 and the adjacent regions Ax2.
- The singular part extraction process based on the plurality of first preprocessed images g11 and the plurality of second preprocessed images g12 is an example of a process for generating the first feature image g21, the second feature image g22, and the third feature image g23.
- S501, S502, ... represent the identification codes of the steps in the feature image generation process according to this application example.
- The feature image generation process according to this application example starts from step S501.
- The feature image generation unit 8c executes the processes of steps S501 to S505, which are the same as those of steps S401 to S405. In step S505, the feature image generation unit 8c shifts the process to step S506 when the processes of steps S501 to S504 have been executed for all of the plurality of compression rate candidates.
- In step S506, the feature image generation unit 8c aggregates the plurality of first preprocessed images g11 into one image and the plurality of second preprocessed images g12 into one image. After that, the feature image generation unit 8c shifts the process to step S507.
- In the aggregation, the feature image generation unit 8c sets a representative value, such as the maximum value or the average value, of the corresponding pixel values in the plurality of first preprocessed images g11 as each pixel value of the aggregated first preprocessed image g11. The same applies to the plurality of second preprocessed images g12.
- In step S507, the feature image generation unit 8c executes the singular part extraction process on the aggregated first preprocessed image g11 and second preprocessed image g12, thereby generating the first feature image g21, the second feature image g22, and the third feature image g23.
- After that, the feature image generation unit 8c ends the feature image generation process.
- S601, S602, ... represent the identification codes of the steps in the feature image generation process according to this application example.
- The feature image generation process according to this application example starts from step S601.
- In the following, the sizes of the region of interest Ax1 and the adjacent regions Ax2 are referred to as filter sizes.
- In step S601, the feature image generation unit 8c selects the filter size to be adopted from a plurality of preset size candidates, and shifts the process to step S602.
- In step S602, the feature image generation unit 8c generates the first preprocessed image g11 by executing, on the test image g1, the first preprocessing with the filter size selected in step S601. After that, the feature image generation unit 8c shifts the process to step S603.
- In step S603, the feature image generation unit 8c generates the second preprocessed image g12 by executing, on the test image g1, the second preprocessing with the filter size selected in step S601. After that, the feature image generation unit 8c shifts the process to step S604.
- In step S604, the feature image generation unit 8c shifts the process to step S605 when the processes of steps S601 to S603 have been executed for all of the plurality of size candidates; otherwise, it executes the processes of steps S601 to S603 for a different filter size.
- That is, the feature image generation unit 8c executes, on one test image g1, the first preprocessing and the second preprocessing a plurality of times with different sizes of the region of interest Ax1 and the adjacent regions Ax2. As a result, the feature image generation unit 8c generates a plurality of first preprocessed images g11 and a plurality of second preprocessed images g12.
- In steps S605 and S606, the feature image generation unit 8c performs the same processes as in steps S406 and S407 of FIG. 9. After that, the feature image generation unit 8c ends the feature image generation process.
- That is, a plurality of candidates for each of the first feature image g21, the second feature image g22, and the third feature image g23 are aggregated, and the aggregated first feature image g21, second feature image g22, and third feature image g23 are generated.
- Steps S601 to S604, in which the first preprocessing and the second preprocessing are executed a plurality of times with different size ratios between the test image g1 and the region of interest Ax1 and adjacent regions Ax2, are an example of a process for generating the plurality of first preprocessed images g11 and the plurality of second preprocessed images g12.
- Changing the filter size is an example of changing the size ratio between the test image g1 and the sizes of the region of interest Ax1 and the adjacent regions Ax2.
- S701, S702, ... represent the identification codes of the steps in the feature image generation process according to this application example.
- The feature image generation process according to this application example starts from step S701.
- The feature image generation unit 8c executes the processes of steps S701 to S704, which are the same as those of steps S601 to S604. In step S704, the feature image generation unit 8c shifts the process to step S705 when the processes of steps S701 to S703 have been executed for all of the plurality of size candidates.
- The feature image generation unit 8c then executes the processes of steps S705 and S706, which are the same as those of steps S506 and S507. After that, the feature image generation unit 8c ends the feature image generation process.
- In the singular part extraction process, the feature image generation unit 8c distinguishes pixels constituting the singular portion Ps1 from pixels that do not by comparing each pixel value of the first preprocessed image g11 and the second preprocessed image g12 with a predetermined reference range.
- That is, the feature image generation unit 8c identifies the singular portion Ps1 by the magnitude of each pixel value of the first preprocessed image g11 and the second preprocessed image g12.
- The feature image generation unit 8c extracts the vertical streaks Ps11 by removing, from the singular portions Ps1 of the first preprocessed image g11, the singular portions Ps1 common to the first preprocessed image g11 and the second preprocessed image g12.
- The feature image generation unit 8c extracts the horizontal streaks Ps12 by removing, from the singular portions Ps1 of the second preprocessed image g12, the singular portions Ps1 common to the first preprocessed image g11 and the second preprocessed image g12.
- The feature image generation unit 8c extracts the singular portions Ps1 common to the first preprocessed image g11 and the second preprocessed image g12 as the noise points Ps13.
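The separation rule in the last three items maps directly onto boolean masks; the mask representation is an assumption of this sketch.

```python
import numpy as np

def separate_singular_parts(mask1, mask2):
    """Split singular-part masks from the first (horizontal-direction) and
    second (vertical-direction) preprocessed images into vertical streaks,
    horizontal streaks, and noise points."""
    noise = mask1 & mask2        # common to both -> noise points Ps13
    vertical = mask1 & ~noise    # only in the first  -> vertical streaks Ps11
    horizontal = mask2 & ~noise  # only in the second -> horizontal streaks Ps12
    return vertical, horizontal, noise
```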
- The feature image generation unit 8c generates the first feature image g21 by converting the first pixel values Xi at positions determined not to be the vertical streaks Ps11 in the first preprocessed image g11 into interpolated values based on the surrounding pixel values.
- The feature image generation unit 8c generates the second feature image g22 by converting the second pixel values Yi at positions determined not to be the horizontal streaks Ps12 in the second preprocessed image g12 into interpolated values based on the surrounding pixel values.
- The feature image generation unit 8c generates the third feature image g23 by converting the first pixel values Xi at positions determined not to be the noise points Ps13 in the first preprocessed image g11 into interpolated values based on the surrounding pixel values.
- Alternatively, the feature image generation unit 8c may generate the third feature image g23 by converting the second pixel values Yi at positions determined not to be the noise points Ps13 in the second preprocessed image g12 into interpolated values based on the surrounding pixel values.
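A simplified sketch of this conversion. The text calls for interpolation based on surrounding pixel values; this stand-in flattens every non-kept pixel to the mean of the non-kept background, which is an acknowledged simplification rather than the patent's interpolation.

```python
import numpy as np

def keep_only(pre_image, keep_mask):
    """Keep the pixels of one singular-part type and replace every other
    pixel with a background value derived from the non-kept pixels."""
    img = pre_image.astype(float)
    others = img[~keep_mask]
    background = others.mean() if others.size else 0.0
    return np.where(keep_mask, img, background)
```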
Description
The image processing apparatus 10 according to the embodiment includes an image forming apparatus 2 that executes print processing. The print processing is processing for forming an image on a sheet. The sheet is an image formation medium such as paper or a sheet-like resin member.
An example of the procedure of the image defect determination process will now be described with reference to the flowchart shown in FIG. 3. In the following description, S101, S102, ... represent the identification codes of the steps in the image defect determination process.
In step S101, the feature image generation unit 8c generates the test image g1 from the read image obtained by the image reading process performed on the test output sheet 9.
In step S102, the feature image generation unit 8c starts the singular defect determination process described later. The singular defect determination process is a process of determining the presence or absence of singular portions Ps1, such as the vertical streaks Ps11, the horizontal streaks Ps12, or the noise points Ps13, in the test image g1 and the cause of any such singular portion Ps1 (see FIG. 6). The singular portion Ps1 is an example of the image defect.
In step S103, the periodicity determination unit 8f starts the density unevenness determination process described later. The main control unit 8a shifts the process to step S104 when the density unevenness determination process ends.
In step S104, the main control unit 8a shifts the process to step S105 when it is determined by the process of step S102 or step S103 that the image defect has occurred, and shifts the process to step S106 otherwise.
In step S105, the main control unit 8a executes a defect handling process associated in advance with the type and cause of the image defect determined to have occurred by the process of step S102 or step S103.
In step S106, on the other hand, the main control unit 8a issues a normality notification indicating that no image defect was identified, and then ends the image defect determination process.
Next, an example of the procedure of the singular defect determination process in step S102 will be described with reference to the flowchart shown in FIG. 4. In the following description, S201, S202, ... represent the identification codes of the steps in the singular defect determination process. The singular defect determination process starts from step S201.
First, in step S201, the feature image generation unit 8c generates the plurality of feature images g21, g22, g23 by executing a predetermined feature extraction process on the test image g1. Each of the feature images g21, g22, g23 is an image in which a predetermined specific type of singular portion Ps1 in the test image g1 has been extracted.
In step S202, the singular part specifying unit 8d specifies the position of the singular portion Ps1 in each of the feature images g21, g22, g23.
In step S203, the color vector specifying unit 8e specifies a color vector representing a vector in the color space from one to the other of the color of the singular portion Ps1 and the color of a reference region including the periphery of the singular portion Ps1 in the test image g1.
In step S204, the periodicity determination unit 8f shifts the process to step S205 when the singular portion Ps1 has been specified in one or both of the second feature image g22 and the third feature image g23, and shifts the process to step S206 otherwise.
In step S205, the periodicity determination unit 8f executes the periodic singular part determination process on the image subject to the periodicity determination. The periodic singular part determination process includes a number determination process, a singular part periodicity determination process, and a singular part periodicity cause determination process.
In step S206, the pattern recognition unit 8g executes the feature pattern recognition process on each of the first feature image g21 and the second feature image g22 and third feature image g23 each including the aperiodic singular portion. The second feature image g22 including the aperiodic singular portion or the third feature image g23 including the aperiodic singular portion is an example of the aperiodic feature image.
続いて、図5に示されるフローチャートを参照しつつ、工程S103の前記濃度ムラ判定処理の手順の一例について説明する。以下の説明において、S301,S302,…は、前記濃度ムラ判定処理における複数の工程の識別符号を表す。前記濃度ムラ判定処理は、工程S301から開始される。
In step S301, the periodicity determination portion 8f derives a vertical data sequence VD1 for each predetermined specific color of the test image g1. The specific colors correspond to the developing colors of the image forming apparatus 2. The vertical data sequence VD1 is a data sequence of representative values V1 of the pixel values of each line in the main scanning direction D1 in the image of the specific color constituting the test image g1 (see Fig. 8).
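The derivation of the vertical data sequence VD1 can be sketched in one line. The choice of the mean as the representative value V1 is an assumption; a median or trimmed mean would serve equally well, and the main scanning direction D1 is assumed here to run along image rows:

```python
import numpy as np

def vertical_data_sequence(color_plane: np.ndarray) -> np.ndarray:
    """Sketch of step S301: collapse each main-scanning-direction line of one
    specific-color plane into a representative value, yielding a 1-D sequence
    ordered along the sub-scanning (vertical) direction."""
    return color_plane.mean(axis=1)
```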
In step S302, the periodicity determination portion 8f executes a periodic unevenness determination process on the vertical data sequence VD1 of each specific color.
In step S303, the random unevenness determination portion 8h determines whether random density unevenness is occurring in each of the three specific-color image data corresponding to the test image g1. The random density unevenness is one type of the image defect.
In step S304, the pattern recognition portion 8g executes a random pattern recognition process. The random pattern recognition process takes the test image g1 determined to contain the random density unevenness as an input image and determines, by pattern recognition of the input image, which of one or more cause candidates the input image corresponds to.
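As a non-limiting sketch of the pattern recognition of step S304, a nearest-neighbour match against per-cause sample images stands in for the trained learning model the disclosure envisages; the cause-candidate names in the test are hypothetical:

```python
import numpy as np

def classify_cause(input_image: np.ndarray, samples: dict) -> str:
    """Return the cause candidate whose sample image is closest to the input
    image in squared-error distance. `samples` maps cause name -> sample image
    of the same shape as the input image."""
    flat = input_image.ravel().astype(float)
    best, best_d = None, np.inf
    for cause, sample in samples.items():
        d = float(np.sum((flat - sample.ravel().astype(float)) ** 2))
        if d < best_d:
            best, best_d = cause, d
    return best
```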
Next, the procedure of the feature image generation process in a first applied example of the image processing apparatus 10 is described with reference to the flowchart shown in Fig. 9.
In step S401, the feature image generation portion 8c selects a compression rate to adopt from among a plurality of preset compression rate candidates, and advances the process to step S402.
In step S402, the feature image generation portion 8c generates the test image g1 by compressing the read image at the selected compression rate. The processing of steps S401 and S402 is an example of a compression process. Thereafter, the feature image generation portion 8c advances the process to step S403.
In step S403, the feature image generation portion 8c generates a first preprocessed image g11 by executing the first preprocessing on the compressed test image g1 obtained in step S402. Thereafter, the feature image generation portion 8c advances the process to step S404.
In step S404, the feature image generation portion 8c generates a second preprocessed image g12 by executing the second preprocessing on the compressed test image g1 obtained in step S402. Thereafter, the feature image generation portion 8c advances the process to step S405.
In step S405, the feature image generation portion 8c advances the process to step S406 when the processing of steps S401 to S404 has been executed for all of the compression rate candidates; otherwise, it executes steps S401 to S404 again for a different compression rate.
In step S406, the feature image generation portion 8c executes the singular part extraction process on each of the plurality of first preprocessed images g11 and the plurality of second preprocessed images g12. The feature image generation portion 8c thereby generates a plurality of candidates for each of the first feature image g21, the second feature image g22, and the third feature image g23 corresponding to the plurality of test images g1. Thereafter, the feature image generation portion 8c advances the process to step S407.
In step S407, the feature image generation portion 8c generates the first feature image g21, the second feature image g22, and the third feature image g23 by aggregating the plurality of candidates obtained in step S406, and then ends the feature image generation process.
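The loop of steps S401 to S407 can be sketched as follows. Several details are assumptions for illustration only: nearest-neighbour decimation stands in for the unspecified compression, `extract` is any function returning the three boolean feature images for one test image, and the aggregation OR-s the candidates together after resizing them to a common size:

```python
import numpy as np

def multi_rate_feature_images(read_image: np.ndarray, rates, extract):
    """Sketch of the first applied example: build a test image at each
    candidate compression rate, collect feature-image candidates from each,
    and aggregate the candidates per feature image."""
    h, w = read_image.shape
    candidates = [[], [], []]
    for r in rates:                           # e.g. keep every r-th pixel
        test_image = read_image[::r, ::r]
        for bucket, feat in zip(candidates, extract(test_image)):
            # upsample back to the original size so candidates can be merged
            full = np.repeat(np.repeat(feat, r, axis=0), r, axis=1)[:h, :w]
            bucket.append(full)
    # aggregation: a pixel is singular if any rate's candidate flags it
    return [np.logical_or.reduce(b) for b in candidates]
```

Processing at several rates lets coarse rates suppress pixel-level noise while fine rates preserve thin streaks, which is the motivation the applied example suggests.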
Next, the procedure of the feature image generation process in a second applied example of the image processing apparatus 10 is described with reference to the flowchart shown in Fig. 10.
The feature image generation portion 8c executes the processing of steps S501 to S505, which is the same as that of steps S401 to S405. In step S505, when the processing of steps S501 to S504 has been executed for all of the compression rate candidates, the feature image generation portion 8c advances the process to step S506.
In step S506, the feature image generation portion 8c aggregates the plurality of first preprocessed images g11 and the plurality of second preprocessed images into one each. Thereafter, the feature image generation portion 8c advances the process to step S507.
In step S507, the feature image generation portion 8c generates the first feature image g21, the second feature image g22, and the third feature image g23 by executing the singular part extraction process on the aggregated first preprocessed image g11 and second preprocessed image g12, and then ends the feature image generation process.
Next, the procedure of the feature image generation process in a third applied example of the image processing apparatus 10 is described with reference to the flowchart shown in Fig. 11.
In step S601, the feature image generation portion 8c selects a filter size to adopt from among a plurality of preset size candidates, and advances the process to step S602.
In step S602, the feature image generation portion 8c generates a first preprocessed image g11 by executing the first preprocessing on the test image g1 at the filter size selected in step S601. Thereafter, the feature image generation portion 8c advances the process to step S603.
In step S603, the feature image generation portion 8c generates a second preprocessed image g12 by executing the second preprocessing on the test image g1 at the filter size selected in step S601. Thereafter, the feature image generation portion 8c advances the process to step S604.
In step S604, the feature image generation portion 8c advances the process to step S605 when the processing of steps S601 to S603 has been executed for all of the size candidates; otherwise, it executes steps S601 to S603 again at a different filter size.
In steps S605 and S606, the feature image generation portion 8c executes the same processing as steps S406 and S407 of Fig. 9, and then ends the feature image generation process.
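The filter-size loop of the third applied example can be sketched similarly. The same illustrative contrast filter is reused at each candidate size, and taking the per-pixel maximum response is one plausible aggregation rule (the disclosure leaves the exact rule open); varying the size lets both one-pixel-wide and wider streaks produce a strong response:

```python
import numpy as np

def multi_size_preprocess(test_image: np.ndarray, sizes, axis: int = 1):
    """Sketch of steps S601-S606 for one processing direction: run the
    illustrative main filter at every candidate filter size on the same test
    image and keep the strongest response at each pixel."""
    responses = []
    for size in sizes:
        left = np.roll(test_image, size, axis=axis)
        right = np.roll(test_image, -size, axis=axis)
        responses.append(np.abs(2.0 * test_image - left - right))
    return np.max(responses, axis=0)
```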
Next, the procedure of the feature image generation process in a fourth applied example of the image processing apparatus 10 is described with reference to the flowchart shown in Fig. 11.
The feature image generation portion 8c executes the processing of steps S701 to S704, which is the same as that of steps S601 to S604. In step S704, when the processing of steps S701 to S703 has been executed for all of the size candidates, the feature image generation portion 8c advances the process to step S705.
The feature image generation portion 8c further executes the processing of steps S705 and S706, which is the same as that of steps S506 and S507, and then ends the feature image generation process.
Next, the feature image generation process in a fifth applied example of the image processing apparatus 10 is described.
Claims (20)
- An image processing method in which a processor determines an image defect in a test image obtained through an image reading process performed on an output sheet of an image forming apparatus, the image processing method comprising:
the processor generating a first preprocessed image by executing, with the horizontal direction of the test image as the processing direction, first preprocessing including a main filter process that converts the pixel value of each pixel of interest sequentially selected from the test image into a conversion value obtained by a process of emphasizing the difference between the pixel value of a region of interest including the pixel of interest and the pixel values of two adjacent regions adjacent to the region of interest on both sides in a preset processing direction;
the processor generating a second preprocessed image by executing second preprocessing including the main filter process with the vertical direction of the test image as the processing direction; and
the processor executing a singular part extraction process of extracting, as the image defect, from among singular parts each consisting of one or more significant pixels in the first preprocessed image and the second preprocessed image, a first singular part that is present in the first preprocessed image and is not common to the first preprocessed image and the second preprocessed image, a second singular part that is present in the second preprocessed image and is not common to the first preprocessed image and the second preprocessed image, and a third singular part that is common to the first preprocessed image and the second preprocessed image.
- The image processing method according to claim 1, wherein the first preprocessing includes: generating first main map data by executing the main filter process with the horizontal direction as the processing direction; generating horizontal edge intensity map data by executing, on the test image, an edge emphasis filter process targeting the region of interest and one of the two adjacent regions with the horizontal direction as the processing direction; and generating the first preprocessed image by correcting each pixel value of the first main map data with the corresponding pixel value of the horizontal edge intensity map data, and wherein the second preprocessing includes: generating second main map data by executing the main filter process with the vertical direction as the processing direction; generating vertical edge intensity map data by executing, on the test image, the edge emphasis filter process targeting the region of interest and one of the two adjacent regions with the vertical direction as the processing direction; and generating the second preprocessed image by correcting each pixel value of the second main map data with the corresponding pixel value of the vertical edge intensity map data.
- The image processing method according to claim 1, wherein, in the singular part extraction process, the processor derives an index value of the difference between corresponding pixel values of the first preprocessed image and the second preprocessed image, extracts the first singular part by a process of converting each pixel value of the first preprocessed image with a predetermined first conversion formula based on the index value, extracts the second singular part by a process of converting each pixel value of the second preprocessed image with a predetermined second conversion formula based on the index value, and extracts the third singular part by a process of converting each pixel value of the first preprocessed image or the second preprocessed image with a predetermined third conversion formula based on the index value.
- The image processing method according to claim 1, wherein, in the singular part extraction process, the processor identifies the singular parts by the magnitudes of the pixel values of the first preprocessed image and the second preprocessed image, extracts the first singular part by excluding, from the singular parts of the first preprocessed image, the singular parts common to the first preprocessed image and the second preprocessed image, extracts the second singular part by excluding, from the singular parts of the second preprocessed image, the singular parts common to the first preprocessed image and the second preprocessed image, and extracts the singular parts common to the first preprocessed image and the second preprocessed image as the third singular part.
- The image processing method according to claim 1, further comprising the processor executing a compression process of generating the test image by compressing a read image obtained by the image reading process performed on the output sheet.
- The image processing method according to claim 5, wherein, in the compression process, the processor generates a plurality of the test images of different sizes by compressing the read image at each of a plurality of compression rates, the processor further generates a plurality of the first preprocessed images and a plurality of the second preprocessed images respectively corresponding to the plurality of test images by executing the first preprocessing and the second preprocessing on the plurality of test images, and the processor further extracts the first singular part, the second singular part, and the third singular part by the singular part extraction process based on the plurality of first preprocessed images and the plurality of second preprocessed images.
- The image processing method according to claim 1, wherein the processor generates a plurality of the first preprocessed images and a plurality of the second preprocessed images by executing, on one test image, the first preprocessing a plurality of times and the second preprocessing a plurality of times with different sizes of the region of interest and the adjacent regions, and the processor further extracts the first singular part, the second singular part, and the third singular part by the singular part extraction process based on the plurality of first preprocessed images and the plurality of second preprocessed images.
- The image processing method according to claim 6, wherein the processor extracts a plurality of candidates for each of the first singular part, the second singular part, and the third singular part corresponding to the plurality of test images by executing the singular part extraction process on each of the plurality of first preprocessed images and the plurality of second preprocessed images, and further extracts the first singular part, the second singular part, and the third singular part by aggregating the plurality of candidates.
- The image processing method according to claim 6, wherein the processor aggregates the plurality of first preprocessed images and the plurality of second preprocessed images into one each, and extracts the first singular part, the second singular part, and the third singular part by executing the singular part extraction process on the aggregated first preprocessed image and second preprocessed image.
- The image processing method according to claim 1, wherein, in the singular part extraction process, the processor generates a first feature image in which the first singular part has been extracted from the first preprocessed image, a second feature image in which the second singular part has been extracted from the second preprocessed image, and a third feature image in which the third singular part has been extracted from the first preprocessed image or the second preprocessed image.
- The image processing method according to claim 10, further comprising the processor determining the causes of the first singular part, the second singular part, and the third singular part by executing a predetermined cause determination process using the first feature image, the second feature image, and the third feature image.
- The image processing method according to claim 11, wherein the cause determination process includes a periodic singular part determination process of determining the presence or absence of one or more predetermined periodicities in the vertical direction for the second feature image or the third feature image, and determining the cause of the second singular part or the third singular part according to the periodicity determination result.
- The image processing method according to claim 12, wherein the cause determination process includes: a process of generating a non-periodic feature image in which the second singular part or the third singular part synchronized with the periodicity has been removed from the second feature image or the third feature image; and a feature pattern recognition process of taking the non-periodic feature image as an input image and determining, by pattern recognition of the input image, which of a plurality of predetermined cause candidates corresponding to the second singular part or the third singular part the input image corresponds to.
- The image processing method according to claim 13, wherein the feature pattern recognition process includes a process of taking the first feature image as the input image and determining, by the pattern recognition of the input image, which of a plurality of predetermined cause candidates corresponding to the first singular part the input image corresponds to.
- The image processing method according to claim 13, wherein the feature pattern recognition process is a process of classifying the input image into one of the plurality of cause candidates by a learning model trained in advance with a plurality of sample images corresponding to the plurality of cause candidates as teacher data.
- The image processing method according to claim 11, further comprising the processor identifying a color vector representing a vector in a color space from one of the color of the singular part in the test image and the color of a reference region including the periphery of the singular part to the other, wherein, in the cause determination process, the processor determines the cause of the first singular part, the second singular part, or the third singular part by further using the color vector.
- The image processing method according to claim 11, further comprising the processor executing a periodic unevenness determination process of determining the presence or absence of one or more predetermined periodicities in the vertical direction for each predetermined color of the test image, and determining, according to the periodicity determination result, the presence or absence and the cause of periodic density unevenness, which is one type of the image defect.
- The image processing method according to claim 17, further comprising the processor determining the presence or absence of random density unevenness, which is one type of the image defect, by determining, for the test image determined by the periodic unevenness determination process to have no periodicity, whether the variation of pixel values for each predetermined color exceeds a predetermined allowable range.
- The image processing method according to claim 18, further comprising the processor executing a random pattern recognition process of taking the test image determined to contain the random density unevenness as an input image and determining, by pattern recognition of the input image, which of one or more predetermined cause candidates of the image defect the input image corresponds to.
- An image processing apparatus comprising a processor that executes the processing of the image processing method according to claim 1.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US18/031,585 US20230260102A1 (en) | 2020-12-24 | 2021-12-22 | Image processing method and image processing apparatus |
CN202180069941.9A CN116490374A (zh) | 2020-12-24 | 2021-12-22 | 图像处理方法、图像处理装置 |
JP2022571529A JPWO2022138685A1 (ja) | 2020-12-24 | 2021-12-22 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2020214817 | 2020-12-24 | ||
JP2020-214817 | 2020-12-24 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022138685A1 true WO2022138685A1 (ja) | 2022-06-30 |
Family
ID=82159791
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2021/047457 WO2022138685A1 (ja) | 2020-12-24 | 2021-12-22 | 画像処理方法、画像処理装置 |
Country Status (4)
Country | Link |
---|---|
US (1) | US20230260102A1 (ja) |
JP (1) | JPWO2022138685A1 (ja) |
CN (1) | CN116490374A (ja) |
WO (1) | WO2022138685A1 (ja) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2017013378A (ja) * | 2015-07-01 | 2017-01-19 | キヤノン株式会社 | 画像処理装置および画像処理方法 |
JP2020127096A (ja) * | 2019-02-01 | 2020-08-20 | コニカミノルタ株式会社 | 検査装置、検査方法および検査プログラム |
JP2020154016A (ja) * | 2019-03-18 | 2020-09-24 | コニカミノルタ株式会社 | 画像形成装置及び画像検査方法 |
JP2020177041A (ja) * | 2019-04-15 | 2020-10-29 | キヤノン株式会社 | 画像形成装置 |
Also Published As
Publication number | Publication date |
---|---|
US20230260102A1 (en) | 2023-08-17 |
JPWO2022138685A1 (ja) | 2022-06-30 |
CN116490374A (zh) | 2023-07-25 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21910825; Country of ref document: EP; Kind code of ref document: A1
| ENP | Entry into the national phase | Ref document number: 2022571529; Country of ref document: JP; Kind code of ref document: A
| WWE | Wipo information: entry into national phase | Ref document number: 202180069941.9; Country of ref document: CN
| NENP | Non-entry into the national phase | Ref country code: DE
| 122 | Ep: pct application non-entry in european phase | Ref document number: 21910825; Country of ref document: EP; Kind code of ref document: A1