CN117830315A - Real-time monitoring method and system for printing machine based on image processing - Google Patents


Info

Publication number
CN117830315A
CN117830315A (application CN202410245686.4A)
Authority
CN
China
Prior art keywords: image, threshold value, gray, detection frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410245686.4A
Other languages
Chinese (zh)
Other versions
CN117830315B (en)
Inventor
赵建东
杨桂荣
刘兆锋
李全芳
赵艳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Weinan Dadong Printing And Packaging Machinery Co ltd
Original Assignee
Weinan Dadong Printing And Packaging Machinery Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Weinan Dadong Printing And Packaging Machinery Co ltd filed Critical Weinan Dadong Printing And Packaging Machinery Co ltd
Priority to CN202410245686.4A
Publication of CN117830315A
Application granted
Publication of CN117830315B
Legal status: Active
Anticipated expiration


Classifications

    • G06T7/0004 Industrial image inspection
    • G06T7/11 Region-based segmentation
    • G06T7/13 Edge detection
    • G06T7/136 Segmentation involving thresholding
    • G06V30/148 Segmentation of character regions
    • G06V30/19107 Clustering techniques
    • G06V30/19173 Classification techniques
    • G06T2207/30108 Industrial image inspection (indexing scheme)
    • G06T2207/30144 Printing quality (indexing scheme)
    • G06T2207/30232 Surveillance (indexing scheme)

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to the technical field of image processing, and in particular to a real-time monitoring method and system for a printing machine based on image processing. The method comprises: processing the printed matter image and partitioning it with detection frames to obtain a gray image; performing edge detection to obtain an edge image and calculating gradient values of the pixel points in the connected domains of the edge image; calculating the set-off degree of each detection frame when each gray value is used as a threshold; clustering the set-off degree sequence together with the center coordinates of the detection frames to obtain a plurality of clustering categories, and two-classifying them into a set-off region category and a normal printing area category; calculating the threshold segmentation effect; taking the threshold corresponding to the maximum threshold segmentation effect as the optimal segmentation threshold, segmenting the gray image accordingly, and inputting the segmented image into a preset monitoring model to generate a defect monitoring result. By segmenting the image with an adaptive threshold, the method and system improve the accuracy of the monitoring result.

Description

Real-time monitoring method and system for printing machine based on image processing
Technical Field
The application relates to the technical field of image processing, in particular to a real-time monitoring method and system for a printing machine based on image processing.
Background
Printing machines are widely used in book publishing, packaging, decoration, and similar fields. When components of the printing machine are worn or loose, when there are problems with the printing parameters, or when dust is present in the actual production environment, the printed matter may exhibit defects, such as smearing of the printed characters, so defect monitoring of the printed matter is needed.
The prior art uses image recognition and image processing technology to monitor whether a printed matter has defects. Specifically, the printed matter image output by the printing machine is acquired and preprocessed, and a fixed threshold is set according to the characteristics and manifestations of printing defects. The fixed threshold is usually determined from experience and experimental data and is used to distinguish normal printing results from defective ones; if a certain area or feature in the printed matter image exceeds the fixed threshold, the printed matter image is judged to be defective.
However, when a fixed threshold is used for defect monitoring, the quality and characteristics of the printed matter are often affected by many factors, such as the printing materials. A fixed threshold cannot accommodate these variations and therefore may fail to monitor defects accurately under different conditions; relatively fine defects in particular may be missed, so monitoring of the printed product suffers from low accuracy.
Disclosure of Invention
In order to perform image segmentation with an adaptive threshold and thereby improve the accuracy of detection results, the application provides a real-time monitoring method and system for a printing machine based on image processing.
In a first aspect, the present application provides a real-time monitoring method for a printing press based on image processing, which adopts the following technical scheme:
the real-time monitoring method for a printing machine based on image processing comprises the following steps: processing the printed matter image and partitioning it using detection frames to obtain a gray image, wherein each detection frame contains one printed character;
performing edge detection on the gray image to obtain an edge image, and calculating gradient values of the pixel points in the connected domains of the edge image; calculating the set-off degree of each detection frame when each gray value is used as a threshold, and traversing all thresholds and all detection frames to obtain a set-off degree sequence; the set-off degree P(j,b) of the b-th detection frame under the j-th threshold is calculated from the gray value I(j,b,f,g) and the gradient value T(j,b,f,g) of the g-th pixel point in the f-th connected domain of the detection frame, where the gray values range over the integers 0 to 255: within each connected domain, the maximum of the product of the gray value and a negative-correlation mapping of the gradient value is taken, and these per-domain maxima are aggregated over the connected domains of the detection frame;
clustering is performed according to the set-off degree sequence and the center coordinates of the detection frames to obtain a plurality of clustering categories, and the clustering categories are two-classified into a set-off region category and a normal printing area category;
the threshold segmentation effect E(j) of the j-th threshold is then calculated from: the mean gradient value μ_w of pixel points in the set-off region category; the mean gradient value μ_z of pixel points in the normal printing area category; the weight ω_b and the set-off degree P(j,b) of each of the n detection frames classified into the set-off region category; and a normalization Norm, where B is the total number of detection frames; the threshold corresponding to the maximum threshold segmentation effect is taken as the optimal segmentation threshold, the gray image is segmented according to the optimal segmentation threshold, and the segmented image is input into a preset monitoring model to generate a defect monitoring result.
Optionally, processing the printed matter image and partitioning it using detection frames to obtain a gray image comprises the steps of: acquiring the captured printed matter image and performing graying preprocessing to obtain a preprocessed gray map; dividing the preprocessed gray map into a plurality of regions using detection frames; and performing threshold segmentation on the image of the region in each detection frame for each threshold in turn, marking the gray value of pixel points whose gray value is greater than the threshold as 255 and leaving the gray value of pixel points whose gray value is less than or equal to the threshold unchanged, to obtain the gray image.
Optionally, dividing the preprocessed gray map into a plurality of regions using a detection frame includes: corresponding detection frames are set according to the word size data of the single word and the central coordinates of the word areas, and all the printed words are traversed to obtain a plurality of detection frames.
Optionally, the weight calculation method comprises: taking the region of pixel points in any detection frame whose gray value is not 255 as the target area, obtaining the ratio of the mean gray value of all pixel points in the target area to the mean gray value of pixel points in the normal printing area, and normalizing the ratio to obtain the weight of that detection frame.
Optionally, in the calculation formula of the threshold segmentation effect, the threshold index j ranges over the integers 0 to 255, and the detection-frame index b ranges from 1 to B, where B is the total number of detection frames.
In a second aspect, the present application provides a real-time monitoring system for a printing press based on image processing, which adopts the following technical scheme:
a printer real-time monitoring system based on image processing, comprising: a processor and a memory storing computer program instructions which, when executed by the processor, implement a method for real-time monitoring of a printing press according to image processing.
The application has the following technical effects:
1. The set-off degree of each detection frame is calculated for each gray value used as a threshold, and all thresholds and detection frames are traversed to obtain a set-off degree sequence; the sequence is classified, the threshold segmentation effect is calculated, the threshold corresponding to the maximum threshold segmentation effect is taken as the optimal segmentation threshold, and image segmentation is performed accordingly. In this way the optimal segmentation threshold that captures all printing characteristics is found adaptively, optimizing defect monitoring of the printed product.
2. The acquired printed matter image is partitioned using detection frames, each containing one printed character. After the image is divided into detection frames, the image inside each frame can be processed locally, which narrows the processing range and reduces the amount of computation; the frames can also be processed in parallel, reducing computation time.
Drawings
The above, as well as additional purposes, features, and advantages of exemplary embodiments of the present application will become readily apparent from the following detailed description when read in conjunction with the accompanying drawings. In the drawings, several embodiments of the present application are shown by way of example and not by way of limitation, and identical or corresponding reference numerals indicate identical or corresponding parts.
Fig. 1 is a flowchart of a method for real-time monitoring of a printing press based on image processing according to an embodiment of the application.
Fig. 2 is a flowchart of a method of step S1 in a real-time monitoring method of a printing press based on image processing according to an embodiment of the application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all, of the embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
It should be understood that when the terms "first," "second," and the like are used in the claims, specification, and drawings of this application, they are used merely for distinguishing between different objects and not for describing a particular sequential order. The terms "comprises" and "comprising," when used in the specification and claims of this application, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The embodiment of the application discloses a real-time monitoring method for a printing machine based on image processing. The application scenario is a spot-check table for printed matter on a printing production line: the printing effect of the printed matter is monitored in real time to find printed matter with smear defects. Referring to fig. 1, the method includes steps S1 to S6, as follows:
s1: after the printed matter image is processed and partitioned by using detection frames, a gray image is obtained, wherein each detection frame is provided with a word. Referring to fig. 2, step S1 includes steps S10 to S12, specifically as follows:
s10: and obtaining a preprocessing gray level image after gray level preprocessing is carried out on the acquired printed matter image.
A camera is arranged to capture a top-down image of the printed matter. In the acquired printed matter image, the ink is in a normal state: ink coverage of the printed font is uniform, character edges are clear, and there is no blurring, haloing, or similar phenomena. Graying preprocessing is performed on the printed matter image to obtain a preprocessed gray map. The printed matter image is an RGB (Red, Green, Blue) image with three color channels; in the preprocessed gray map each pixel has a single channel whose gray value represents the brightness of the pixel, between 0 and 255, where 0 is the darkest value (black), 255 is the brightest value (white), and intermediate values represent different shades of gray. Converting the RGB image to a gray map reduces the amount of subsequent computation.
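The graying step can be sketched as follows; since the application does not name the graying formula, the common ITU-R BT.601 luminosity weights are an assumption, and all names are illustrative.

```python
import numpy as np

def to_gray(rgb):
    """Collapse a three-channel RGB print image to one gray channel.

    The application does not specify the graying formula, so the common
    ITU-R BT.601 luminosity weights are assumed here.
    """
    weights = np.array([0.299, 0.587, 0.114])
    return np.rint(rgb.astype(float) @ weights).astype(np.uint8)
```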
S11: the pre-processed gray map is divided into a plurality of regions using a detection frame.
The character-size data of each character and the center coordinates of its character area are obtained. The detection frame is set as a rectangle whose geometric center coincides with the center coordinates of the character area, and its size is set according to the character's font size so that the frame covers the area where one character is located; fonts of different sizes correspond to detection frames of different sizes, with larger fonts getting larger frames and smaller fonts smaller frames.
A corresponding detection frame is set according to the character-size data and center coordinates of each single character, and all printed characters are traversed to obtain a plurality of detection frames; the number of detection frames equals the number of characters preset to be printed on the current sheet.
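The frame construction can be sketched as follows; the margin factor and all names are assumptions, not values from the application.

```python
def make_detection_frames(char_centers, font_sizes, margin=1.2):
    """One rectangular detection frame per printed character.

    Each frame is centred on the character's centre coordinate and scaled
    with its font size so that the frame covers the character; the margin
    factor is an assumed value, not taken from the application.
    """
    frames = []
    for (cx, cy), size in zip(char_centers, font_sizes):
        half = size * margin / 2.0
        frames.append((cx - half, cy - half, cx + half, cy + half))
    return frames
```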
S12: threshold segmentation is performed on the image of the region in each detection frame for each threshold in turn: the gray value of pixel points whose gray value is greater than the threshold is marked as 255, and the gray value of pixel points whose gray value is less than or equal to the threshold is left unchanged, giving the gray image.
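A minimal sketch of this per-frame thresholding; the function name is illustrative.

```python
import numpy as np

def apply_threshold(gray_region, threshold):
    """Mark pixels whose gray value exceeds the threshold as 255 and leave
    all other pixels unchanged, as described in step S12."""
    out = gray_region.copy()
    out[gray_region > threshold] = 255
    return out
```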
S2: and carrying out edge detection on the gray level image to obtain an edge image, and calculating gradient values of pixel points in a connected domain in the edge image.
Edge detection is performed with the Canny operator on the region of pixel points in each detection frame whose gray value is not 255, giving an edge detection result. Edges are light-dark boundaries: a set-off (smear) region is produced when a normally inked region rubs against other printing paper, so the ink marks of a set-off region are lighter than those of normal printing and there is an obvious gray-level change near its edges.
Following the direction of decreasing gray value across the edge, the edge is expanded using the dilation operation from classical image morphology to obtain a dilated result map, so that the data on the expanded edge can be obtained while avoiding errors caused by data outside the edge.
The expansion range of a pixel point is its eight-neighborhood, that is, the neighborhood formed by the 8 pixel points surrounding it: the adjacent pixel points above, below, left, right, and in the four diagonal directions. The edge detection result and the dilated result map are multiplied element-wise, and the resulting image is taken as the edge image.
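The edge-image construction can be sketched as follows, with two stated simplifications: a central-difference gradient magnitude stands in for the Canny operator, and the dilation covers the full eight-neighbourhood rather than only the high-to-low gray direction.

```python
import numpy as np

def edge_image(gray, edge_thresh=50.0):
    """Simplified sketch of step S2's edge image.

    A central-difference gradient magnitude stands in for the Canny
    operator, the binary edge mask is dilated over each pixel's
    eight-neighbourhood, and the edge mask and the dilated mask are
    multiplied element-wise.
    """
    g = gray.astype(float)
    gx = np.zeros_like(g)
    gy = np.zeros_like(g)
    gx[:, 1:-1] = g[:, 2:] - g[:, :-2]
    gy[1:-1, :] = g[2:, :] - g[:-2, :]
    edges = (np.hypot(gx, gy) > edge_thresh).astype(np.uint8)
    # eight-neighbourhood dilation via shifted maxima over a padded mask
    padded = np.pad(edges, 1)
    h, w = edges.shape
    dilated = np.max(
        [padded[i:i + h, j:j + w] for i in range(3) for j in range(3)],
        axis=0,
    )
    return edges * dilated
```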
For all the pixel points in each connected domain of the edge image, gradient values are calculated with a standard gradient operator, giving the gradient value of each pixel point in each connected domain; gradient calculation with such an operator is prior art and is not described here.
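The gradient calculation can be sketched as follows; the application's operator symbol does not survive in the text, so the Sobel operator is assumed here.

```python
import numpy as np

def gradient_magnitude(gray):
    """Per-pixel gradient magnitude over a gray image.

    The operator named in the application is reproduced only as an image,
    so the Sobel operator is an assumption.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    padded = np.pad(gray.astype(float), 1, mode="edge")
    h, w = gray.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(3):
        for j in range(3):
            window = padded[i:i + h, j:j + w]
            gx += kx[i, j] * window
            gy += ky[i, j] * window
    return np.hypot(gx, gy)
```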
S3: calculating the set-off degree of any detection frame when any gray value is used as a threshold value, traversing all the threshold values and each detection frame, and obtaining a set-off degree sequence.
The set-off degree P(j,b) of the b-th detection frame under the j-th threshold is calculated from the gray value I(j,b,f,g) and the gradient value T(j,b,f,g) of the g-th pixel point in the f-th connected domain of the detection frame: within each connected domain, the maximum of the product of the gray value and a negative-correlation mapping of the gradient value is taken, and these per-domain maxima are aggregated over the connected domains of the detection frame.
The region of pixel points in any detection frame whose gray value is not 255 is taken as the target area; the ratio of the mean gray value of all pixel points in the target area to the mean gray value of pixel points in the normal printing area is obtained and normalized, giving the weight of that detection frame.
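A sketch of the weight calculation; the application does not specify the normalization of the ratio, so squashing via r / (1 + r) is an assumption.

```python
import numpy as np

def frame_weight(target_grays, normal_mean):
    """Weight of one detection frame.

    The ratio of the target area's mean gray value to the normal printing
    area's mean gray value is normalized into (0, 1); the normalization
    r / (1 + r) is assumed, not taken from the application.
    """
    ratio = float(np.mean(target_grays)) / float(normal_mean)
    return ratio / (1.0 + ratio)
```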
The higher a pixel point's gray value and the smaller its gradient value, the more likely the pixel point belongs to a set-off region. Taking, within each connected domain, the maximum of the product of the gray value and a negative-correlation mapping of the gradient value finds the pixel point that best indicates whether the connected domain contains a set-off region. Accordingly, in the gray image obtained with the j-th threshold, the larger a detection frame's set-off degree, the more likely a set-off region exists in that frame.
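Under these observations the set-off degree can be sketched as follows; because the original formula is reproduced only as an image, both the negative-correlation mapping exp(-T) and the summation of per-domain maxima are assumptions.

```python
import numpy as np

def set_off_degree(connected_domains):
    """Hedged sketch of the set-off degree of one detection frame under
    one threshold.

    For every connected domain the maximum product of a pixel's gray value
    and a negative-correlation mapping of its gradient value is taken; the
    per-domain maxima are then summed. The mapping exp(-T) and the summing
    are assumptions. `connected_domains` is a list of
    (gray_values, gradient_values) pairs, one pair per connected domain.
    """
    degree = 0.0
    for grays, grads in connected_domains:
        grays = np.asarray(grays, dtype=float)
        grads = np.asarray(grads, dtype=float)
        degree += float(np.max(grays * np.exp(-grads)))
    return degree
```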
S4: clustering is carried out according to the set-off degree sequence and the center coordinates of the detection frame to obtain a plurality of clustering categories, and the clustering categories are subjected to two-classification to obtain set-off area categories and normal printing area categories.
Data classification is performed with the DBSCAN (Density-Based Spatial Clustering of Applications with Noise) algorithm according to the set-off degree sequence and the center coordinates of the detection frames. Classification yields several categories; within each category the set-off degrees have similar values and the center coordinates of the detection frames are close. DBSCAN clustering is prior art and is not described in detail here.
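A minimal, dependency-free DBSCAN sketch over feature vectors such as (set-off degree, center-x, center-y); the eps and min_pts values are illustrative, not from the application.

```python
import numpy as np

def dbscan(points, eps=1.0, min_pts=3):
    """Minimal DBSCAN: grow a cluster from each unvisited core point by
    expanding through its eps-neighbourhood; unreachable points stay -1
    (noise)."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    neighbors = [np.flatnonzero(dist[i] <= eps) for i in range(n)]
    labels = np.full(n, -1, dtype=int)
    visited = np.zeros(n, dtype=bool)
    cluster = 0
    for i in range(n):
        if visited[i] or len(neighbors[i]) < min_pts:
            continue
        stack = [i]
        while stack:
            j = stack.pop()
            if visited[j]:
                continue
            visited[j] = True
            labels[j] = cluster
            if len(neighbors[j]) >= min_pts:  # only core points expand
                stack.extend(neighbors[j].tolist())
        cluster += 1
    return labels
```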
The clustering categories obtained by DBSCAN are three-dimensional data. The mean set-off degree of the detection frames in each clustering category is used as the input of the two-classification, and with the size of this mean as the classification condition, the classification result is output: two sets belonging respectively to the set-off region category and the normal printing area category. The set-off degree values of the n detection frames classified into the set-off region category and those of the m detection frames classified into the normal printing area category are collected, and all the data are normalized.
The two-classification uses a trained logistic regression model. The training process is as follows: the parameters of the logistic regression model, including the weight matrix and the bias term, are randomly initialized; the coordinates and set-off degrees of the training data are input, with labels marked by a user on the clustered image data; and the weight matrix and bias term are updated until the model converges. The model outputs whether the data belongs to the set-off region cluster. The loss function used in training is the cross-entropy loss.
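A minimal logistic-regression sketch of the two-classification, trained with the cross-entropy loss via gradient descent as described; learning rate and epoch count are illustrative.

```python
import numpy as np

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Train a logistic-regression classifier (weights + bias) by gradient
    descent on the cross-entropy loss, as in the two-classification step."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid probabilities
        grad = p - y                             # dL/dz for cross-entropy
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

def predict(X, w, b):
    """Label 1 if the predicted probability exceeds 0.5, else 0."""
    return (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
```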
S5: and calculating a threshold segmentation effect.
The threshold segmentation effect E(j) of the j-th threshold is calculated from: the mean gradient value μ_w of pixel points in the set-off region category; the mean gradient value μ_z of pixel points in the normal printing area category; the weight ω_b and the set-off degree P(j,b) of each of the n detection frames classified into the set-off region category; and a normalization Norm, where B is the total number of detection frames. The larger the weighted sum of set-off degrees and the smaller μ_w is relative to μ_z, the larger E(j): a large E(j) indicates that under the j-th gray threshold the smear inside the detection frames is segmented clearly and stably, which is favorable for image segmentation. The larger the weight ω_b, the greater the importance of the corresponding detection frame's set-off degree. The threshold index j ranges over the integers 0 to 255, and the frame index b ranges from 1 to B.
The larger the threshold segmentation effect, the better the current image segmentation, and the more completely the corresponding set-off area and printing area are separated.
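A hedged sketch of the score for a single threshold: the surviving text fixes only the ingredients and their directions of influence, so the exact combination below (weighted sum times the gradient-mean ratio) is an assumption, and the normalization over all thresholds is left to the caller.

```python
import numpy as np

def threshold_effect(set_off_degrees, weights, smear_indices,
                     grad_mean_smear, grad_mean_normal):
    """Hedged sketch of the segmentation-effect score for one threshold.

    The score grows with the weighted sum of set-off degrees over the
    detection frames classified into the set-off region category, and
    grows as the smear-category mean gradient shrinks relative to the
    normal-category mean gradient. The combination used here is an
    assumption, since the original formula is reproduced only as an image.
    """
    P = np.asarray(set_off_degrees, dtype=float)
    w = np.asarray(weights, dtype=float)
    idx = np.asarray(smear_indices, dtype=int)
    weighted_sum = float(np.sum(w[idx] * P[idx]))
    return weighted_sum * grad_mean_normal / grad_mean_smear
```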
S6: taking a threshold corresponding to the maximum value of the threshold segmentation effect as an optimal segmentation threshold, carrying out image segmentation on the gray level image according to the optimal segmentation threshold, and inputting the image segmentation into a preset monitoring model to generate a defect monitoring result.
The segmentation threshold corresponding to the maximum value of the threshold segmentation effect is marked as the optimal segmentation threshold. Substituting this gray threshold into the initial printed matter image, image segmentation is performed to obtain the optimal image segmentation result.
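Step S6 can be sketched as follows; binarisation is an assumed concrete form of the final image segmentation, and the names are illustrative.

```python
import numpy as np

def segment_with_best_threshold(gray, effects):
    """The gray value whose segmentation-effect score is maximal becomes
    the optimal segmentation threshold, and the gray image is binarised
    with it (binarisation is an assumed form of the segmentation)."""
    best = int(np.argmax(np.asarray(effects, dtype=float)))
    segmented = np.where(gray > best, 255, 0).astype(np.uint8)
    return best, segmented
```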
The monitoring model is a trained deep learning model, for example a convolutional neural network. Sample data are collected and labeled: a printed matter image containing a set-off region is labeled 1, and a printed matter image without a set-off region is labeled 0. The labeled samples are input into the deep learning model for training until the loss function reaches the preset value of 0.01 or the number of training iterations reaches 100, at which point training is complete. The loss function used in training is the cross-entropy loss.
After a printed matter image is acquired, the optimal image segmentation threshold is found adaptively and used for image segmentation. The segmented image is input into the monitoring model, paper with set-off defects is detected and regarded as a printing error, and such paper is automatically discharged from the spot-check table so that defective printed sheets do not enter the subsequent workflow. This ensures the accuracy of ink printing on the paper and achieves real-time monitoring of the printing machine's printed products.
The embodiment of the application also discloses a printer real-time monitoring system based on image processing, which comprises a processor and a memory, wherein the memory stores computer program instructions, and the printer real-time monitoring method based on image processing according to the application is realized when the computer program instructions are executed by the processor.
The above system further comprises other components well known to those skilled in the art, such as a communication bus and a communication interface, the arrangement and function of which are known in the art and therefore are not described in detail herein.
In the context of this application, the foregoing memory may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. For example, the computer-readable storage medium may be any suitable magnetic or magneto-optical storage medium, such as resistive random access memory (RRAM), dynamic random access memory (DRAM), static random access memory (SRAM), enhanced dynamic random access memory (EDRAM), high bandwidth memory (HBM), or hybrid memory cube (HMC), or any other medium that can store the desired information and be accessed by an application, a module, or both.
While various embodiments of the present application have been shown and described herein, it will be obvious to those skilled in the art that such embodiments are provided by way of example only. Many modifications, changes, and substitutions will now occur to those skilled in the art without departing from the spirit and scope of the application. It should be understood that various alternatives to the embodiments described herein may be employed in practicing the application.
The foregoing are all preferred embodiments of the present application, and are not intended to limit the scope of the present application in any way, therefore: all equivalent changes in structure, shape and principle of this application should be covered in the protection scope of this application.

Claims (6)

1. The real-time monitoring method of the printing machine based on the image processing is characterized by comprising the following steps:
processing the printed matter image and partitioning it using detection frames to obtain a gray image, wherein each detection frame contains one printed character;
performing edge detection on the gray level image to obtain an edge image, and calculating gradient values of pixel points in a connected domain in the edge image;
calculating the set-off degree of each detection frame when each gray value is used as a threshold, and traversing all thresholds and all detection frames to obtain a set-off degree sequence, wherein the set-off degree P(j,b) of the b-th detection frame under the j-th threshold is calculated from the gray value I(j,b,f,g) and the gradient value T(j,b,f,g) of the g-th pixel point in the f-th connected domain of the detection frame, the gray values ranging over the integers 0 to 255;
clustering according to the set-off degree sequence and the center coordinates of the detection frames to obtain a plurality of cluster categories, and performing binary classification on the cluster categories to obtain a set-off region category and a normal printing region category;
the threshold segmentation effect is calculated, and the calculation formula of the threshold segmentation effect is as follows:
wherein->Indicate->Threshold segmentation effect of the individual thresholds->Gradient value mean value representing pixel points in the set-off region class, < >>Gradient value mean value representing pixel points in normal printing area category, +.>Representation normalization->Representing the arrangement +.>Weight of->Represents +.>Threshold value, th->The degree of smudging of the individual detection frames, +.>For the total number of detection frames, < > is->The total number of the detection frames belonging to the set-off region category in the second classification;
taking the threshold corresponding to the maximum value of the threshold segmentation effect as the optimal segmentation threshold, performing image segmentation on the gray image according to the optimal segmentation threshold, and inputting the segmented image into a preset monitoring model to generate a defect monitoring result.
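Claim 1's final step — choosing the threshold that maximizes the segmentation effect and segmenting against it — can be sketched as follows. The per-threshold effect scores are assumed to be precomputed (the patent's scoring formula is reproduced only as an image), and the segmentation rule follows claim 2's partial thresholding:

```python
import numpy as np

def segment_with_best_threshold(gray, effect_scores):
    """Pick the threshold whose segmentation-effect score is largest
    and apply claim 2's partial thresholding: pixels brighter than
    the threshold become 255, the rest keep their original gray value.

    gray          : 2-D uint8 array, the preprocessed gray image
    effect_scores : length-256 array; effect_scores[t] is the
                    (precomputed) segmentation effect of threshold t
    """
    best_t = int(np.argmax(effect_scores))   # optimal segmentation threshold
    out = gray.copy()
    out[out > best_t] = 255                  # background pushed to white
    return best_t, out
```

In the method's pipeline, the returned image would then be fed to the preset monitoring model for defect detection.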
2. The image processing-based real-time monitoring method for a printing machine according to claim 1, wherein processing the printed matter image and partitioning it using detection frames to obtain a gray image comprises the following steps:
acquiring the captured printed matter image and performing grayscale preprocessing to obtain a preprocessed grayscale map;
dividing the preprocessed grayscale map into a plurality of regions using detection frames;
performing threshold segmentation on the image region inside each detection frame with different thresholds: the gray value of any pixel point whose gray value is greater than the threshold is set to 255, while pixel points whose gray value is less than or equal to the threshold are left unchanged, yielding the gray image.
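The per-frame thresholding described in this claim can be sketched as follows; the `(row0, row1, col0, col1)` box layout is an assumed representation, since the patent derives frame positions from character size and center coordinates (claim 3):

```python
import numpy as np

def threshold_frames(gray, boxes, t):
    """Apply the claim's per-frame thresholding: inside each detection
    frame, pixels with gray value greater than t are set to 255;
    pixels <= t are left unchanged.  `boxes` is a list of
    (row0, row1, col0, col1) frame coordinates.
    """
    out = gray.copy()
    for r0, r1, c0, c1 in boxes:
        region = out[r0:r1, c0:c1]   # NumPy view: writes land in `out`
        region[region > t] = 255
    return out
```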
3. The image processing-based real-time monitoring method for a printing machine according to claim 2, wherein dividing the preprocessed grayscale map into a plurality of regions using detection frames comprises: setting a corresponding detection frame according to the font-size data of each single character and the center coordinates of the character region, and traversing all printed characters to obtain a plurality of detection frames.
4. The image processing-based real-time monitoring method for a printing machine according to claim 2, wherein the weight is calculated as follows: taking the area of pixel points whose gray value is not 255 within any detection frame as the target area, obtaining the ratio of the mean gray value of all pixel points in the target area to the mean gray value of the pixel points in the normal printing area, and normalizing the ratio to obtain the weight of that detection frame.
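A minimal sketch of this weight, under the assumption that "normalizing the ratio" maps it into (0, 1) via x / (1 + x) — the patent does not spell out which normalization it uses:

```python
import numpy as np

def frame_weight(frame, normal_mean, eps=1e-9):
    """Weight of one detection frame per claim 4: mean gray value of
    the frame's non-255 pixels (the 'target area', i.e. the ink left
    untouched by thresholding) divided by the mean gray value of the
    normal printing area, then normalized.  The x / (1 + x) squashing
    is a hypothetical choice of normalization.
    """
    target = frame[frame != 255]
    if target.size == 0:             # frame thresholded entirely to white
        return 0.0
    ratio = float(target.mean()) / (normal_mean + eps)
    return ratio / (1.0 + ratio)     # maps [0, inf) into [0, 1)
```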
5. The image processing-based real-time monitoring method for a printing machine according to claim 1, wherein the parameters in the calculation formula of the threshold segmentation effect take values within prescribed ranges (the specific ranges are reproduced only as images in the original publication).
6. A real-time monitoring system for a printing machine based on image processing, characterized by comprising: a processor and a memory storing computer program instructions which, when executed by the processor, implement the image processing-based printing machine real-time monitoring method according to any one of claims 1 to 5.
CN202410245686.4A 2024-03-05 2024-03-05 Real-time monitoring method and system for printing machine based on image processing Active CN117830315B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410245686.4A CN117830315B (en) 2024-03-05 2024-03-05 Real-time monitoring method and system for printing machine based on image processing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410245686.4A CN117830315B (en) 2024-03-05 2024-03-05 Real-time monitoring method and system for printing machine based on image processing

Publications (2)

Publication Number Publication Date
CN117830315A true CN117830315A (en) 2024-04-05
CN117830315B CN117830315B (en) 2024-05-10

Family

ID=90519366

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410245686.4A Active CN117830315B (en) 2024-03-05 2024-03-05 Real-time monitoring method and system for printing machine based on image processing

Country Status (1)

Country Link
CN (1) CN117830315B (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH04302375A (en) * 1991-03-29 1992-10-26 Eastman Kodak Japan Kk Image binarizing device
JPH06201611A (en) * 1993-01-08 1994-07-22 Datsuku Eng Kk Method for detecting defect of sheetlike printed matter
CN101799434A (en) * 2010-03-15 2010-08-11 深圳市中钞科信金融科技有限公司 Printing image defect detection method
CN207489095U (en) * 2017-12-08 2018-06-12 天津市焕彩印刷有限公司 A kind of press quality amount detecting device
WO2022105623A1 (en) * 2020-11-23 2022-05-27 西安科锐盛创新科技有限公司 Intracranial vascular focus recognition method based on transfer learning
US20220247883A1 (en) * 2021-02-03 2022-08-04 Kyocera Document Solutions Inc. Image Forming Apparatus, Image Forming Method, and Recording Medium
CN116503394A (en) * 2023-06-26 2023-07-28 济南奥盛包装科技有限公司 Printed product surface roughness detection method based on image
EP4231257A1 (en) * 2022-02-22 2023-08-23 European Central Bank Test deck of artificially soiled banknotes, manufacturing process and uses thereof


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118071735A (en) * 2024-04-16 2024-05-24 深圳勤本电子有限公司 Liquid leakage detection method and system
CN118071735B (en) * 2024-04-16 2024-07-02 深圳勤本电子有限公司 Liquid leakage detection method and system

Also Published As

Publication number Publication date
CN117830315B (en) 2024-05-10

Similar Documents

Publication Publication Date Title
CN117830315B (en) Real-time monitoring method and system for printing machine based on image processing
CN113160192B (en) Visual sense-based snow pressing vehicle appearance defect detection method and device under complex background
JP5008572B2 (en) Image processing method, image processing apparatus, and computer-readable medium
CN109919149B (en) Object labeling method and related equipment based on object detection model
CN114926839B (en) Image identification method based on RPA and AI and electronic equipment
CN111797766B (en) Identification method, identification device, computer-readable storage medium, and vehicle
CN111680690A (en) Character recognition method and device
CN112598627A (en) Method, system, electronic device and medium for detecting image defects
CN113063802B (en) Method and device for detecting defects of printed labels
CN111126391A (en) Method for positioning defects of printed characters
CN117423126A (en) Bill image-text recognition method and system based on data analysis
CN113392819B (en) Batch academic image automatic segmentation and labeling device and method
CN117576106A (en) Pipeline defect detection method and system
CN114040116B (en) Plastic mould good product monitoring feedback system
Wang et al. Local defect detection and print quality assessment
CN116703899B (en) Bag type packaging machine product quality detection method based on image data
CN110322466B (en) Supervised image segmentation method based on multi-layer region limitation
CN113256644A (en) Bill image segmentation method, device, medium, and apparatus
CN109993171B (en) License plate character segmentation method based on multiple templates and multiple proportions
CN115601760A (en) Defect evaluation method for first flexo printing piece
CN110533098B (en) Method for identifying loading type of green traffic vehicle compartment based on convolutional neural network
CN113610838A (en) Bolt defect data set expansion method
CN112883977A (en) License plate recognition method and device, electronic equipment and storage medium
CN113177602A (en) Image classification method and device, electronic equipment and storage medium
CN118334674B (en) Automatic identification method and system for document shooting image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant