CN117934456A - Packaging box printing quality detection method based on image processing - Google Patents


Info

Publication number
CN117934456A
Authority
CN
China
Prior art keywords
area
region
foreground
areas
printed image
Prior art date
Legal status
Granted
Application number
CN202410316641.1A
Other languages
Chinese (zh)
Other versions
CN117934456B (en)
Inventor
杨思侠
林桐
杨世发
董世贤
Current Assignee
Dalian Jianfeng Printing Co ltd
Original Assignee
Dalian Jianfeng Printing Co ltd
Priority date
Filing date
Publication date
Application filed by Dalian Jianfeng Printing Co ltd filed Critical Dalian Jianfeng Printing Co ltd
Priority to CN202410316641.1A priority Critical patent/CN117934456B/en
Publication of CN117934456A publication Critical patent/CN117934456A/en
Application granted granted Critical
Publication of CN117934456B publication Critical patent/CN117934456B/en
Legal status: Active


Classifications

    • G06T 7/0002 - Image analysis; inspection of images, e.g. flaw detection
    • G06T 7/136 - Image segmentation; edge detection involving thresholding
    • G06T 7/194 - Image segmentation; edge detection involving foreground-background segmentation
    • G06T 7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 2207/10004 - Image acquisition modality: still image; photographic image
    • G06T 2207/20021 - Special algorithmic details: dividing image into blocks, subimages or windows
    • G06T 2207/30168 - Subject of image: image quality inspection
    • Y02P 90/30 - Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of image data processing, and in particular to a packaging box printing quality detection method based on image processing, comprising the following steps: closing edge lines in the printed image so as to divide each target area into a background area and a foreground area; obtaining an abnormal credibility of each target area from the areas of its foreground and background regions and the gray value distribution of their pixel points; dividing all target areas into first areas and second areas accordingly; weighting the segmentation thresholds of the first areas by their abnormal credibilities to obtain an optimal segmentation threshold; re-segmenting the second areas with this threshold to obtain updated foreground and background areas; obtaining the defect degree of the printed image from the gray value differences between foreground and background pixel points and from the areas of the foreground regions; and detecting the printing quality of the packaging box from the defect degree. The invention improves the segmentation of the printed image and thereby improves the accuracy of the packaging box printing quality detection result.

Description

Packaging box printing quality detection method based on image processing
Technical Field
The invention relates to the technical field of image data processing, in particular to a packaging box printing quality detection method based on image processing.
Background
In the process of printing packaging boxes, residual crystals on the surface of the printing roller, or an ink layer made uneven by such crystals, may leave certain areas of the image uncolored or unevenly colored. These defective areas differ in color from normal areas, so the printing quality of the packaging box surface is commonly detected with a threshold segmentation algorithm, and the printing quality of the current packaging box is evaluated from the segmentation result.
However, when a conventional threshold segmentation algorithm is used to detect the printing quality of the packaging box surface, the incomplete coloring caused by a defect means that the gray-level features of some defective areas can be similar to those of other areas in the current image. After the whole image is thresholded, the segmentation result is therefore not distinct, the defective areas are difficult to distinguish accurately, and the accuracy of the packaging box printing quality detection is consequently low.
Disclosure of Invention
The invention provides a packaging box printing quality detection method based on image processing, which aims to solve the existing problems.
The invention relates to a packaging box printing quality detection method based on image processing, which adopts the following technical scheme:
An embodiment of the present invention provides a method for detecting print quality of a package based on image processing, including the steps of:
acquiring an RGB printed image of the printed surface of the packaging box and a grayscale printed image obtained from it;
Acquiring edge lines in the printed image and gradient amplitude values of pixel points in the RGB printed image, closing the edge lines which do not form a closed region according to the gradient amplitude differences between the edge pixel points at the end points of those edge lines and their adjacent pixel points, marking each closed region formed by an edge line in the printed image as a target region, acquiring a segmentation threshold value of each target region, and segmenting a background region and a foreground region of each target region;
Obtaining defect expression degrees of the target areas according to the area differences of the foreground areas and the background areas in the target areas and the gray value distribution conditions of the pixel points, fusing the defect expression degrees and the areas of the target areas to obtain abnormal credibility of the target areas, dividing all the target areas into a first area and a second area according to the abnormal credibility, weighting a segmentation threshold of the first area by using the abnormal credibility of the first area in a printing image to obtain an optimal segmentation threshold of the printing image, obtaining an updated foreground area and an updated background area of the second area by using the optimal segmentation threshold, and obtaining the defect degree of the printing image according to the gray value differences of the pixel points in the foreground areas and the background areas of the first area and the updated foreground areas and the updated background areas of the second area;
And detecting the printing quality of the packaging box according to the defect degree of the printed image.
Further, the method for obtaining the edge lines in the printed image and the gradient amplitudes of the pixel points in the RGB printed image by performing edge detection on the printed image comprises the following specific steps:
Performing edge detection on the printed image by using a Canny algorithm to obtain an edge image of the printed image, and obtaining edge lines in the edge image, which are not formed with a closed area, and marking the edge lines as first edge lines;
And (3) acquiring gradient amplitude values of any pixel point in the printed image under an R channel, a G channel and a B channel of the RGB printed image respectively by utilizing a Sobel operator.
Further, the method for closing an edge line which does not form a closed region according to the gradient amplitude differences between the edge pixel point at each end point of that edge line and its adjacent pixel points comprises the following specific steps:
Marking 8 neighborhood pixel points of any one edge pixel point in the edge image as neighborhood pixel points of the edge pixel points;
and starting from the end point of any first edge line, acquiring the gradient difference parameter between the edge pixel point at either end point of the first edge line and each neighborhood pixel point of that edge pixel point; when the gradient difference parameter between a neighborhood pixel point and the edge pixel point at the end point is larger than a preset first threshold value, taking that neighborhood pixel point as a new edge pixel point of the first edge line, acquiring the edge pixel point at the end point of the first edge line again, and so on, until the first edge line forms a closed area.
Further, the specific calculation method of the gradient difference parameter comprises the following steps:
wherein the gradient difference parameter is obtained from the differences between the gradient amplitudes of the two pixel points under the R, G and B channels: D_j denotes the gradient difference parameter between the edge pixel point at either end point of the first edge line and the j-th neighborhood pixel point of that edge pixel point; G_R, G_G and G_B denote the gradient amplitudes of the edge pixel point at that end point under the R channel, the G channel and the B channel respectively; G_R^j, G_G^j and G_B^j denote the gradient amplitudes of the j-th neighborhood pixel point of that edge pixel point under the R channel, the G channel and the B channel respectively; |·| denotes the absolute value.
Further, the specific method for obtaining the segmentation threshold value of each target region and segmenting the background region and the foreground region of each target region includes the following steps:
obtaining a segmentation threshold value of each target area by using Otsu's method, obtaining the number of pixel points corresponding to each gray value in any one target area, and marking the pixel points whose gray value has the largest number of corresponding pixel points in the target area as first pixel points;
When the gray value of the first pixel points is larger than the segmentation threshold value of the target area, the areas formed by all the first pixel points in the target area are marked as background areas of the target area, and other areas except the background areas are foreground areas;
when the gray value of the first pixel points is smaller than or equal to the segmentation threshold value of the target area, the areas formed by all the first pixel points in the target area are marked as foreground areas of the target area, and the areas outside the foreground areas are marked as background areas.
Further, the method for obtaining the defect expression degree of the target area according to the area difference between the foreground area and the background area in the target area and the gray value distribution condition of the pixel points comprises the following specific steps:
the ratio of the area of the foreground region to the area of the background region of the target area is marked as a first ratio; taking each gray value as one gray level, and acquiring the number of gray levels in the target area and the information entropy of the gray values of all pixel points in the target area;
recording the product of the first ratio, the number of gray levels in the target area and the information entropy of the gray values of all pixel points in the target area as the foreground confusion degree of the target area;
the absolute value of the difference between the minimum gray value of all pixel points in the foreground region of the target area and the segmentation threshold of the target area is recorded as a first difference value, and the absolute value of the difference between the maximum gray value of all pixel points in the background region of the target area and the segmentation threshold of the target area is recorded as a second difference value; the sum of the first difference value and the second difference value is marked as the segmentation coefficient of the target area;
and recording the product of the foreground confusion degree of the target area, the segmentation coefficient and the variance of the gray values of all pixel points in the foreground region of the target area as the defect expression degree of the target area.
Further, the method for obtaining the abnormal credibility of the target area by fusing the defect expression degree and the area of each target area includes the following steps:
Multiplying the defect expression degree of the target area by the area of the target area, marking the multiplication result as a credibility factor, carrying out linear normalization on the credibility factors of all the target areas, marking the normalization result as abnormal credibility of the corresponding target area, marking the target area with the abnormal credibility larger than or equal to a preset credibility threshold value as a first area, and marking the target area outside the first area as a second area.
Further, the method for weighting the segmentation threshold of the first region by using the abnormal reliability of the first region in the printed image to obtain the optimal segmentation threshold of the printed image, and obtaining the updated foreground region and the updated background region of the second region by using the optimal segmentation threshold comprises the following specific steps:
the specific calculation method of the optimal segmentation threshold value of the printed image comprises the following steps:
T* = (Σ_{i=1}^{n} r_i · t_i) / (Σ_{i=1}^{n} r_i), wherein T* denotes the optimal segmentation threshold of the printed image; n denotes the number of first areas in the printed image; r_i denotes the abnormal credibility of the i-th first area in the printed image; t_i denotes the segmentation threshold of the i-th first area in the printed image;
And taking the optimal segmentation threshold of the printed image as the segmentation threshold of a second area in the printed image, and acquiring an updated background area and an updated foreground area in the second area, wherein the acquisition methods of the updated background area and the updated foreground area in the second area are the same as the acquisition methods of the background area and the foreground area of the target area.
Further, the specific method for obtaining the defect degree of the printed image according to the difference of gray values of pixel points in the foreground region and the background region of the first region, the updated foreground region and the updated background region of the second region, and the areas of the foreground region and the updated foreground region includes the following steps:
the first region and the second region are collectively referred to as a third region; the foreground region of the first region and the updated foreground region of the second region are collectively referred to as a final foreground region, and the background region of the first region and the updated background region of the second region are collectively referred to as a final background region;
the specific calculation method of the defect degree of the printed image comprises the following steps:
Q = Sigmoid( Σ_{i=1}^{N} (S_i^f / S_i) · |μ_i^f − μ_i^b| ), wherein Q denotes the defect degree of the printed image; N denotes the number of third areas in the printed image; S_i^f denotes the area of the final foreground region of the i-th third area in the printed image; S_i denotes the area of the i-th third area in the printed image; μ_i^f and μ_i^b respectively denote the mean gray values of all pixel points in the final foreground region and the final background region of the i-th third area; |·| denotes the absolute value; Sigmoid(·) denotes the sigmoid normalization function.
Further, the method for detecting the printing quality of the packaging box according to the defect degree of the printed image comprises the following specific steps:
When the defect degree of the printed image is larger than a preset defect degree threshold, the packaging box corresponding to the printed image is marked as unqualified in printing and is discarded for recycling.
The technical scheme of the invention has the following beneficial effects: the printed image is partitioned by edge detection so that the areas corresponding to different patterns are separated into a plurality of target areas, and threshold segmentation is performed on each target area; the segmentation thresholds of the first areas are weighted by their abnormal credibilities to obtain the optimal segmentation threshold of the printed image, and the second areas are segmented again with it. This improves the segmentation of the areas of the printed image where threshold segmentation is otherwise unsatisfactory, and further improves the accuracy of the packaging box printing quality detection result.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart showing the steps of a method for detecting the printing quality of a packaging box based on image processing;
FIG. 2 is a schematic view of a printed image provided by a method for detecting print quality of a package based on image processing according to the present invention;
Fig. 3 is a schematic diagram of an edge image of a printed image provided by the method for detecting the print quality of a package box based on image processing according to the present invention.
Detailed Description
In order to further describe the technical means and effects adopted by the invention to achieve the preset aim, the following is a detailed description of specific implementation, structure, characteristics and effects of the method for detecting the printing quality of the packaging box based on image processing according to the invention with reference to the attached drawings and the preferred embodiment. In the following description, different "one embodiment" or "another embodiment" means that the embodiments are not necessarily the same. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following specifically describes a specific scheme of the method for detecting the printing quality of the packaging box based on image processing.
Referring to fig. 1, a flowchart of a method for detecting printing quality of a packaging box based on image processing according to an embodiment of the present invention is shown, the method includes the following steps:
step S001: and acquiring an RGB printed image of the printed surface of the packaging box and a grey-scaled printed image.
After the surface of the packaging box is printed, printing defects with low coloring uniformity produce a complex gray-level distribution on the box surface. When the acquired image is segmented with a threshold-based algorithm, the final result depends on the segmentation threshold: a threshold that is too small causes under-segmentation, and one that is too large causes over-segmentation. An accurate segmentation threshold therefore needs to be determined from the acquired image to improve the segmentation result and, in turn, the accuracy of the packaging box printing quality detection.
Specifically, in order to implement the method for detecting the printing quality of the packaging box based on image processing provided in this embodiment, a printed image needs to be collected first, and the specific process is as follows:
Under uniform illumination, an image of the packaging box surface after printing is collected by a camera and recorded as the RGB printed image, and the RGB printed image is converted to grayscale to obtain the printed image; a schematic diagram of the printed image is shown in FIG. 2.
To this end, a printed image is obtained by the above method.
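A minimal sketch of this acquisition-and-graying step, written with OpenCV in Python for illustration; the file path is a placeholder for the frame delivered by the camera and is not part of the original method.

```python
import cv2


def acquire_print_images(path="box_surface.png"):
    """Read the RGB print image of the box surface and derive the grayscale print image.

    `path` is a hypothetical placeholder; in production the frame would come
    from the line camera under uniform illumination.
    """
    rgb_print_image = cv2.imread(path, cv2.IMREAD_COLOR)  # OpenCV loads in BGR order
    if rgb_print_image is None:
        raise FileNotFoundError(path)
    print_image = cv2.cvtColor(rgb_print_image, cv2.COLOR_BGR2GRAY)
    return rgb_print_image, print_image
```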
Step S002: Acquiring edge lines in the printed image and gradient amplitude values of pixel points in the RGB printed image, closing the edge lines which do not form a closed region according to the gradient amplitude differences between the edge pixel points at the end points of those edge lines and their adjacent pixel points, marking each closed region formed by an edge line in the printed image as a target region, acquiring a segmentation threshold value of each target region, and segmenting a background region and a foreground region of each target region.
It should be noted that print quality problems manifest as discrete pixel points inside the different pattern areas of the image. To make the defective features within each pattern easier to identify after threshold segmentation, the different pattern areas of the image are first separated, so that a segmentation threshold can be determined for the features of each area. When the pixel points of an area in the segmentation result show a greater defect expression degree, the area is more likely to contain a defect and its abnormal reliability is higher. Areas with lower reliability are then re-processed using a segmentation threshold derived from the areas with higher reliability, and the defect degree of the image is determined from the resulting segmentation.
Printing defects appear within the patterns on the surface of the packaging box, and there are obvious edge features between different pattern areas, so edge detection is used to separate the patterns and divide the image into different areas, and an optimal segmentation threshold is then adapted to the gray-level features of each area. Because the color difference between the defective part of some patterns and the other patterns is small, segmenting the whole image with a single threshold segmentation algorithm could leave some areas incompletely segmented or make the segmentation result difficult to observe.
Specifically, step (2.1): first, edge detection is performed on the printed image with the Canny algorithm to obtain the edge image of the printed image shown in FIG. 3; the edge lines in the edge image that do not form a closed area are obtained and marked as first edge lines, and the gradient amplitudes of every pixel point in the printed image under the R, G and B channels of the RGB printed image are obtained with the Sobel operator. The 8 neighborhood pixel points of any edge pixel point in the edge image are marked as the neighborhood pixel points of that edge pixel point.
It should be noted that the Canny algorithm and the Sobel operator are both existing algorithms, so their details are not repeated in this embodiment.
Then, starting from the end point of any first edge line, the gradient difference parameter between the edge pixel point at that end point of the first edge line and each of its neighborhood pixel points is acquired; when the gradient difference parameter between a neighborhood pixel point and the edge pixel point at the end point is larger than the preset first threshold value, that neighborhood pixel point is taken as a new edge pixel point of the first edge line, the edge pixel point at the end point of the first edge line is acquired again, and so on, until the first edge line closes to form a region.
The specific calculation method of the gradient difference parameter between the corresponding edge pixel point at the end point of the first edge line and any one of the neighborhood pixel points comprises the following steps:
wherein the gradient difference parameter is obtained from the differences between the gradient amplitudes of the two pixel points under the R, G and B channels: D_j denotes the gradient difference parameter between the edge pixel point at either end point of the first edge line and the j-th neighborhood pixel point of that edge pixel point; G_R, G_G and G_B denote the gradient amplitudes of the edge pixel point at that end point under the R channel, the G channel and the B channel respectively; G_R^j, G_G^j and G_B^j denote the gradient amplitudes of the j-th neighborhood pixel point of that edge pixel point under the R channel, the G channel and the B channel respectively; |·| denotes the absolute value.
It should be noted that, the preset first threshold is 0.6, and may be adjusted according to actual situations, and the embodiment is not specifically limited.
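A sketch of the edge detection and edge-line closing of step (2.1) follows. The Canny thresholds and the exponential form used for the gradient difference parameter are assumptions made for illustration (the patent's own expression for the parameter is not reproduced here); only its inputs, the per-channel Sobel gradient amplitudes, are taken from the text.

```python
import cv2
import numpy as np

FIRST_THRESHOLD = 0.6  # preset first threshold of this embodiment
NEIGHBOURS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
              (0, 1), (1, -1), (1, 0), (1, 1)]  # 8-neighbourhood offsets


def channel_gradient_magnitudes(rgb_print_image):
    """Sobel gradient amplitude of every pixel under each of the three colour channels."""
    mags = []
    for c in range(3):
        gx = cv2.Sobel(rgb_print_image[:, :, c], cv2.CV_64F, 1, 0, ksize=3)
        gy = cv2.Sobel(rgb_print_image[:, :, c], cv2.CV_64F, 0, 1, ksize=3)
        mags.append(np.sqrt(gx ** 2 + gy ** 2))
    return np.stack(mags, axis=-1)  # shape (H, W, 3)


def line_endpoints(edges):
    """Edge pixels with exactly one edge neighbour, i.e. the end points of open edge lines."""
    h, w = edges.shape
    pts = []
    for y, x in zip(*np.nonzero(edges)):
        n = sum(edges[y + dy, x + dx]
                for dy, dx in NEIGHBOURS
                if 0 <= y + dy < h and 0 <= x + dx < w)
        if n == 1:
            pts.append((y, x))
    return pts


def gradient_difference_parameter(mags, p, q):
    """Assumed stand-in for the gradient difference parameter: close to 1 when the
    R/G/B gradient amplitudes of p and q are similar, decaying as they differ."""
    diff = float(np.abs(mags[p] - mags[q]).sum())
    return float(np.exp(-diff / 255.0))


def close_edge_lines(rgb_print_image, print_image, max_steps=100_000):
    """Canny edges plus greedy extension of open edge lines from their end points."""
    edges = cv2.Canny(print_image, 50, 150) > 0          # assumed Canny thresholds
    mags = channel_gradient_magnitudes(rgb_print_image)
    h, w = edges.shape
    frontier = line_endpoints(edges)
    steps = 0
    while frontier and steps < max_steps:
        y, x = frontier.pop()
        for dy, dx in NEIGHBOURS:
            ny, nx = y + dy, x + dx
            if not (0 <= ny < h and 0 <= nx < w) or edges[ny, nx]:
                continue
            if gradient_difference_parameter(mags, (y, x), (ny, nx)) > FIRST_THRESHOLD:
                edges[ny, nx] = True          # adopt the neighbour as a new edge pixel
                frontier.append((ny, nx))     # keep extending from the new end point
                break
        steps += 1
    return edges.astype(np.uint8) * 255
```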
Step (2.2): the areas enclosed by the edge lines in the printed image are acquired and marked as target areas, the segmentation threshold value of each target area is obtained with Otsu's method, the number of pixel points corresponding to each gray value in any one target area is obtained, and the pixel points whose gray value has the largest number of corresponding pixel points in the target area are marked as first pixel points. When the gray value of the first pixel points is larger than the segmentation threshold value of the target area, the area formed by all the first pixel points in the target area is marked as the background area of the target area, and the remaining area is the foreground area; when the gray value of the first pixel points is smaller than or equal to the segmentation threshold value of the target area, the area formed by all the first pixel points in the target area is marked as the foreground area of the target area, and the remaining area is the background area.
Note that Otsu's method is an existing threshold segmentation algorithm, so its details are not repeated in this embodiment.
So far, the background area and the foreground area in each target area are obtained through the method.
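A sketch of step (2.2) under one plausible reading: the Otsu threshold is computed from the region's own pixels, and the side of the threshold containing the most frequent gray value (the "first pixels") is taken as the background.

```python
import cv2
import numpy as np


def split_target_region(print_image, region_mask):
    """Otsu threshold of one target region and its foreground/background split.

    `region_mask` is a boolean mask of the target region in the grayscale print image.
    """
    values = print_image[region_mask].reshape(1, -1)     # region pixels as a 1xN image
    thresh, _ = cv2.threshold(values, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    mode_gray = int(np.argmax(np.bincount(values.ravel(), minlength=256)))
    above = region_mask & (print_image > thresh)
    below = region_mask & (print_image <= thresh)
    if mode_gray > thresh:                                # most frequent gray value is bright
        background, foreground = above, below
    else:                                                 # most frequent gray value is dark
        background, foreground = below, above
    return float(thresh), foreground, background
```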
Step S003: obtaining defect expression degree of the target area according to the area difference of the foreground area and the background area in the target area and the gray value distribution condition of the pixel points, fusing the defect expression degree and the area of each target area to obtain abnormal credibility of the target area, dividing all the target areas into a first area and a second area according to the abnormal credibility, weighting a segmentation threshold of the first area by using the abnormal credibility of the first area in the printed image to obtain an optimal segmentation threshold of the printed image, obtaining an updated foreground area and an updated background area of the second area by using the optimal segmentation threshold, and obtaining the defect degree of the printed image according to the gray value difference of the pixel points in the foreground area and the background area of the first area and the updated foreground area and the updated background area of the second area.
Different pattern regions obtained by the division show different defect expression levels. In a region where the defects are expressed more clearly, the segmentation threshold separates foreground from background more effectively, whereas in a region where the defects are expressed poorly the segmentation effect is worse; at the same time, the more disordered the foreground of a region is, the more clearly its defects are expressed. Different regions therefore have different probabilities of containing defects, that is, different reliabilities of region abnormality and hence different reliabilities of their segmentation thresholds.
Specifically, in step (3.1), each gray value is taken as one gray level, and the foreground confusion degree of the target area is obtained; the specific calculation method is as follows:
R = (S_f / S_b) · K · H, wherein R denotes the foreground confusion degree of the target area; S_f and S_b respectively denote the areas of the foreground region and the background region of the target area; K denotes the number of gray levels in the target area; H denotes the information entropy of the gray values of all pixel points in the target area.
Here, an area is measured as a number of pixel points. The ratio S_f / S_b reflects how much of the target area the foreground occupies: the larger the ratio, the larger the area occupied by the defective foreground within the target area and the higher the degree of confusion in the foreground region. The more gray levels there are in the target area, the more varied the gray values of its pixel points and the lower their consistency; and the larger the information entropy of the target region, the more disordered the gray value distribution of its pixel points.
Printing defects arising in the printing process are mainly manifested as a large number of discrete small areas inside part of an image area, so the greater the degree of confusion of these discrete small areas, the more clearly the defect is expressed.
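The foreground confusion degree could be computed roughly as follows; the base-2 logarithm for the information entropy and the guard against an empty background are assumptions of this sketch.

```python
import numpy as np


def foreground_confusion_degree(print_image, region_mask, foreground, background):
    """(foreground area / background area) x number of gray levels x information entropy."""
    s_f = int(foreground.sum())                           # foreground area in pixels
    s_b = int(background.sum())                           # background area in pixels
    values = print_image[region_mask]
    counts = np.bincount(values, minlength=256)
    probs = counts[counts > 0] / values.size
    num_levels = int((counts > 0).sum())                  # number of distinct gray levels K
    entropy = float(-(probs * np.log2(probs)).sum())      # information entropy H
    return (s_f / max(s_b, 1)) * num_levels * entropy
```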
Step (3.2), obtaining defect expression degree of the target area, wherein the specific calculation method comprises the following steps:
P = R · V · ( |g_min − T| + |g_max − T| ), wherein P denotes the defect expression degree of the target area; R denotes the foreground confusion degree of the target area; V denotes the variance of the gray values of all pixel points in the foreground region of the target area; |g_min − T| + |g_max − T| is the segmentation coefficient of the target area, in which g_min denotes the minimum gray value of all pixel points in the foreground region of the target area, g_max denotes the maximum gray value of all pixel points in the background region of the target area, and T denotes the segmentation threshold of the target area; |·| denotes the absolute value.
The defect expression degree describes the probability that the foreground region of the target area contains a printing defect: the greater the defect expression degree, the more likely the target area contains a printing defect. The greater the degree of confusion of the pixel points in the foreground region, the greater the probability of a printing defect in the target region; the variance of the gray values of all pixel points in the foreground region reflects how dispersed those gray values are, and the greater the dispersion, the greater the probability that the foreground region is a printing defect; the segmentation coefficient reflects how well threshold segmentation separates the target area, and the better the separation, the more clearly the defective area stands out.
In this embodiment, the gray values of the pixels in the area are counted to obtain the variance of the foreground pixel gray values and the mean gray value of the background; the larger the variance, the more discrete the gray value distribution of the foreground pixels and the more clearly the foreground defect is expressed. Meanwhile, if the minimum gray value of the foreground region is far greater than the segmentation threshold and the maximum gray value of the background region is far less than the segmentation threshold, the segmentation of the current region is better, i.e. the defective area stands out more clearly. The degree to which the foreground pixel points of the current region express the defect is then calculated by combining the foreground confusion degree, the segmentation effect and the dispersion of the gray values.
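A corresponding sketch of the defect expression degree of one target region, following the product form described above; the early return for degenerate regions is an added safeguard, not part of the original description.

```python
import numpy as np


def defect_expression_degree(print_image, foreground, background, thresh, confusion):
    """Confusion degree x variance of foreground gray values x segmentation coefficient."""
    fg = print_image[foreground].astype(np.float64)
    bg = print_image[background].astype(np.float64)
    if fg.size == 0 or bg.size == 0:
        return 0.0                                        # degenerate region: nothing to measure
    variance = float(fg.var())                            # dispersion of foreground gray values
    seg_coeff = abs(fg.min() - thresh) + abs(bg.max() - thresh)   # segmentation coefficient
    return confusion * variance * seg_coeff
```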
By analyzing the segmentation result of the current region, the defect expression degree of the current region's foreground in the image is obtained; the greater this defect expression degree, and the larger the region's area, the higher the reliability that the current region is abnormal.
And (3.3) multiplying the defect expression degree of the target area by the area of the target area, marking the multiplication result as a credibility factor, carrying out linear normalization on the credibility factors of all the target areas, marking the normalization result as abnormal credibility of the corresponding target area, marking the target area with the abnormal credibility greater than or equal to a preset credibility threshold value as a first area, and marking the target area outside the first area as a second area.
It should be noted that, the confidence threshold value is preset to 0.6 according to experience, and may be adjusted according to actual situations, and the embodiment is not specifically limited.
The abnormal reliability reflects the probability that a target region contains defects. Regions whose probability of containing defects is low contribute less to the defect degree of the whole image, so the regions with higher abnormal reliability are selected by screening and are used to judge the defect degree of the image regions.
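The abnormal credibility and the first/second split might look as follows; min-max scaling is used here for the linear normalization, and 0.6 is the credibility threshold given in this embodiment.

```python
import numpy as np


def split_regions_by_credibility(expression_degrees, region_areas, cred_threshold=0.6):
    """Credibility factor = expression degree x region area, then min-max normalisation."""
    factors = np.asarray(expression_degrees, dtype=np.float64) * np.asarray(region_areas, dtype=np.float64)
    span = factors.max() - factors.min()
    credibility = (factors - factors.min()) / span if span > 0 else np.zeros_like(factors)
    first_idx = np.nonzero(credibility >= cred_threshold)[0]    # first (high-credibility) regions
    second_idx = np.nonzero(credibility < cred_threshold)[0]    # second regions, to be re-segmented
    return credibility, first_idx, second_idx
```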
The optimal segmentation threshold value of the printing image is obtained, and the specific calculation method comprises the following steps:
T* = (Σ_{i=1}^{n} r_i · t_i) / (Σ_{i=1}^{n} r_i), wherein T* denotes the optimal segmentation threshold of the printed image; n denotes the number of first areas in the printed image; r_i denotes the abnormal credibility of the i-th first area in the printed image; t_i denotes the segmentation threshold of the i-th first area in the printed image.
It should be noted that the segmentation threshold of each first region in the printed image is already the optimal threshold obtained by Otsu's method, and since a first region has an abnormal reliability greater than the reliability threshold, it has already been determined to contain a printing defect. The second regions have not yet been determined, so in this embodiment the first regions are used to derive the segmentation threshold of the second regions and improve their segmentation. If the weighted optimal segmentation threshold of the printed image were applied to all target regions, the segmentation of the first regions might be degraded; the first regions therefore do not need to be thresholded again.
Further, the optimal segmentation threshold of the printed image is used as the segmentation threshold of the second area in the printed image, the background area and the foreground area in the second area are acquired again, and the updated foreground area and the updated background area of the second area are recorded.
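A sketch of this re-segmentation step; the credibility-weighted average is an assumed reading of the weighting formula, and the updated split reuses the same mode-based rule as the original per-region segmentation.

```python
import numpy as np


def optimal_segmentation_threshold(credibilities, thresholds):
    """Credibility-weighted combination of the first regions' segmentation thresholds."""
    c = np.asarray(credibilities, dtype=np.float64)
    t = np.asarray(thresholds, dtype=np.float64)
    return float((c * t).sum() / c.sum())


def resegment_second_region(print_image, region_mask, optimal_thresh):
    """Updated foreground/background of a second region using the image-level threshold."""
    mode_gray = int(np.argmax(np.bincount(print_image[region_mask], minlength=256)))
    above = region_mask & (print_image > optimal_thresh)
    below = region_mask & (print_image <= optimal_thresh)
    if mode_gray > optimal_thresh:
        return below, above            # (updated foreground, updated background)
    return above, below
```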
The first region and the second region are collectively referred to as a third region; the foreground region of the first region and the updated foreground region of the second region are collectively referred to as a final foreground region, and the background region of the first region and the updated background region of the second region are collectively referred to as a final background region;
the specific calculation method of the defect degree of the printed image comprises the following steps:
Q = Sigmoid( Σ_{i=1}^{N} (S_i^f / S_i) · |μ_i^f − μ_i^b| ), wherein Q denotes the defect degree of the printed image; N denotes the number of third areas in the printed image; S_i^f denotes the area of the final foreground region of the i-th third area in the printed image; S_i denotes the area of the i-th third area in the printed image; μ_i^f and μ_i^b respectively denote the mean gray values of all pixel points in the final foreground region and the final background region of the i-th third area; |·| denotes the absolute value; Sigmoid(·) denotes the sigmoid normalization function.
It should be noted that, in this embodiment, the proportion of foreground pixel points in each area of the printed image reflects the defect degree of the printing defects: the larger this proportion, the larger the defect degree. In addition, the larger the difference between the mean gray values of all pixel points in the foreground region and in the background region of each area, the larger the defect degree of the printing defects in the printed image.
So far, the defect degree of the printed image is obtained through the method.
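The defect degree of the whole printed image could then be aggregated as below; the per-region term (foreground area ratio times the absolute difference of the mean gray values) follows the description above, while summing the terms before the sigmoid is an assumption of this sketch.

```python
import numpy as np


def print_image_defect_degree(print_image, final_foregrounds, final_backgrounds, region_masks):
    """Sum each third region's contribution and squash it with a sigmoid."""
    total = 0.0
    for fg, bg, region in zip(final_foregrounds, final_backgrounds, region_masks):
        if fg.sum() == 0 or bg.sum() == 0:
            continue                                      # skip regions with an empty class
        area_ratio = fg.sum() / region.sum()              # S_i^f / S_i
        mean_diff = abs(print_image[fg].mean() - print_image[bg].mean())
        total += area_ratio * mean_diff
    return float(1.0 / (1.0 + np.exp(-total)))            # sigmoid normalisation
```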
Step S004: and detecting the printing quality of the packaging box according to the defect degree of the printed image.
Specifically, when the defect degree of the printed image is larger than a preset defect degree threshold, the packaging box corresponding to the printed image is marked as unqualified in printing and is discarded for recycling.
It should be noted that, the defect level threshold value is preset to be 0.6 according to experience, and may be adjusted according to actual situations, and the embodiment is not particularly limited.
This embodiment is completed.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the invention, but any modifications, equivalent substitutions, improvements, etc. within the principles of the present invention should be included in the scope of the present invention.

Claims (10)

1. The method for detecting the printing quality of the packaging box based on the image processing is characterized by comprising the following steps of:
acquiring an RGB printed image of the printed surface of the packaging box and a grayscale printed image obtained from it;
Acquiring edge lines in the printed image and gradient amplitude values of pixel points in the RGB printed image, closing the edge lines which do not form a closed region according to the gradient amplitude differences between the edge pixel points at the end points of those edge lines and their adjacent pixel points, marking each closed region formed by an edge line in the printed image as a target region, acquiring a segmentation threshold value of each target region, and segmenting a background region and a foreground region of each target region;
Obtaining defect expression degrees of the target areas according to the area differences of the foreground areas and the background areas in the target areas and the gray value distribution conditions of the pixel points, fusing the defect expression degrees and the areas of the target areas to obtain abnormal credibility of the target areas, dividing all the target areas into a first area and a second area according to the abnormal credibility, weighting a segmentation threshold of the first area by using the abnormal credibility of the first area in a printing image to obtain an optimal segmentation threshold of the printing image, obtaining an updated foreground area and an updated background area of the second area by using the optimal segmentation threshold, and obtaining the defect degree of the printing image according to the gray value differences of the pixel points in the foreground areas and the background areas of the first area and the updated foreground areas and the updated background areas of the second area;
And detecting the printing quality of the packaging box according to the defect degree of the printed image.
2. The method for detecting the printing quality of the packaging box based on the image processing according to claim 1, wherein the method for acquiring the edge line in the printed image and the gradient amplitude of the pixel point in the RGB printed image comprises the following specific steps:
Performing edge detection on the printed image by using a Canny algorithm to obtain an edge image of the printed image, and obtaining edge lines in the edge image, which are not formed with a closed area, and marking the edge lines as first edge lines;
And (3) acquiring gradient amplitude values of any pixel point in the printed image under an R channel, a G channel and a B channel of the RGB printed image respectively by utilizing a Sobel operator.
3. The method for detecting the printing quality of the packaging box based on the image processing according to claim 2, wherein the method for closing an edge line which does not form a closed area according to the gradient amplitude differences between the edge pixel point at each end point of that edge line and its adjacent pixel points comprises the following specific steps:
Marking 8 neighborhood pixel points of any one edge pixel point in the edge image as neighborhood pixel points of the edge pixel points;
and starting from the end point of any first edge line, acquiring the gradient difference parameter between the edge pixel point at either end point of the first edge line and each neighborhood pixel point of that edge pixel point; when the gradient difference parameter between a neighborhood pixel point and the edge pixel point at the end point is larger than a preset first threshold value, taking that neighborhood pixel point as a new edge pixel point of the first edge line, acquiring the edge pixel point at the end point of the first edge line again, and so on, until the first edge line forms a closed area.
4. The method for detecting the printing quality of the packaging box based on the image processing according to claim 3, wherein the specific calculation method of the gradient difference parameter is as follows:
wherein the gradient difference parameter is obtained from the differences between the gradient amplitudes of the two pixel points under the R, G and B channels: D_j denotes the gradient difference parameter between the edge pixel point at either end point of the first edge line and the j-th neighborhood pixel point of that edge pixel point; G_R, G_G and G_B denote the gradient amplitudes of the edge pixel point at that end point under the R channel, the G channel and the B channel respectively; G_R^j, G_G^j and G_B^j denote the gradient amplitudes of the j-th neighborhood pixel point of that edge pixel point under the R channel, the G channel and the B channel respectively; |·| denotes the absolute value.
5. The method for detecting the printing quality of the packaging box based on the image processing according to claim 1, wherein the steps of obtaining the segmentation threshold value of each target area and segmenting the background area and the foreground area of each target area comprise the following specific steps:
obtaining a segmentation threshold value of each target area by using Otsu's method, obtaining the number of pixel points corresponding to each gray value in any one target area, and marking the pixel points whose gray value has the largest number of corresponding pixel points in the target area as first pixel points;
When the gray value of the first pixel points is larger than the segmentation threshold value of the target area, the areas formed by all the first pixel points in the target area are marked as background areas of the target area, and other areas except the background areas are foreground areas;
when the gray value of the first pixel points is smaller than or equal to the segmentation threshold value of the target area, the areas formed by all the first pixel points in the target area are marked as foreground areas of the target area, and the areas outside the foreground areas are marked as background areas.
6. The method for detecting the printing quality of the packaging box based on the image processing according to claim 1, wherein the method for obtaining the defect expression degree of the target area according to the area difference between the foreground area and the background area in the target area and the gray value distribution condition of the pixel points comprises the following specific steps:
the ratio of the area of the foreground region to the area of the background region of the target area is marked as a first ratio; taking each gray value as one gray level, and acquiring the number of gray levels in the target area and the information entropy of the gray values of all pixel points in the target area;
recording the product of the first ratio, the number of gray levels in the target area and the information entropy of the gray values of all pixel points in the target area as the foreground confusion degree of the target area;
the absolute value of the difference between the minimum gray value of all pixel points in the foreground region of the target area and the segmentation threshold of the target area is recorded as a first difference value, and the absolute value of the difference between the maximum gray value of all pixel points in the background region of the target area and the segmentation threshold of the target area is recorded as a second difference value; the sum of the first difference value and the second difference value is marked as the segmentation coefficient of the target area;
and recording the product of the foreground confusion degree of the target area, the segmentation coefficient and the variance of the gray values of all pixel points in the foreground region of the target area as the defect expression degree of the target area.
7. The method for detecting the printing quality of the packaging box based on the image processing according to claim 1, wherein the method for obtaining the abnormal credibility of the target area by fusing the defect expression degree and the area of each target area is characterized by dividing all the target areas into a first area and a second area according to the abnormal credibility, and comprises the following specific steps:
Multiplying the defect expression degree of the target area by the area of the target area, marking the multiplication result as a credibility factor, carrying out linear normalization on the credibility factors of all the target areas, marking the normalization result as abnormal credibility of the corresponding target area, marking the target area with the abnormal credibility larger than or equal to a preset credibility threshold value as a first area, and marking the target area outside the first area as a second area.
8. The method for detecting the printing quality of the packaging box based on the image processing according to claim 1, wherein the method for weighting the segmentation threshold value of the first area by using the abnormal credibility of the first area in the printed image to obtain the optimal segmentation threshold value of the printed image and obtaining the updated foreground area and the updated background area of the second area by using the optimal segmentation threshold value comprises the following specific steps:
the specific calculation method of the optimal segmentation threshold value of the printed image comprises the following steps:
T* = (Σ_{i=1}^{n} r_i · t_i) / (Σ_{i=1}^{n} r_i), wherein T* denotes the optimal segmentation threshold of the printed image; n denotes the number of first areas in the printed image; r_i denotes the abnormal credibility of the i-th first area in the printed image; t_i denotes the segmentation threshold of the i-th first area in the printed image;
And taking the optimal segmentation threshold of the printed image as the segmentation threshold of a second area in the printed image, and acquiring an updated background area and an updated foreground area in the second area, wherein the acquisition methods of the updated background area and the updated foreground area in the second area are the same as the acquisition methods of the background area and the foreground area of the target area.
9. The method for detecting the printing quality of the packaging box based on the image processing according to claim 1, wherein the obtaining the defect degree of the printing image according to the gray value difference of the pixel points in the foreground region and the background region of the first region and the updated foreground region and the updated background region of the second region and the areas of the foreground region and the updated foreground region comprises the following specific steps:
the first region and the second region are collectively referred to as a third region; the foreground region of the first region and the updated foreground region of the second region are collectively referred to as a final foreground region, and the background region of the first region and the updated background region of the second region are collectively referred to as a final background region;
the specific calculation method of the defect degree of the printed image comprises the following steps:
Q = Sigmoid( Σ_{i=1}^{N} (S_i^f / S_i) · |μ_i^f − μ_i^b| ), wherein Q denotes the defect degree of the printed image; N denotes the number of third areas in the printed image; S_i^f denotes the area of the final foreground region of the i-th third area in the printed image; S_i denotes the area of the i-th third area in the printed image; μ_i^f and μ_i^b respectively denote the mean gray values of all pixel points in the final foreground region and the final background region of the i-th third area; |·| denotes the absolute value; Sigmoid(·) denotes the sigmoid normalization function.
10. The method for detecting the printing quality of the packaging box based on the image processing according to claim 1, wherein the method for detecting the printing quality of the packaging box according to the defect degree of the printed image comprises the following specific steps:
When the defect degree of the printed image is larger than a preset defect degree threshold, the packaging box corresponding to the printed image is marked as unqualified in printing and is discarded for recycling.
CN202410316641.1A 2024-03-20 2024-03-20 Packaging box printing quality detection method based on image processing Active CN117934456B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410316641.1A CN117934456B (en) 2024-03-20 2024-03-20 Packaging box printing quality detection method based on image processing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410316641.1A CN117934456B (en) 2024-03-20 2024-03-20 Packaging box printing quality detection method based on image processing

Publications (2)

Publication Number Publication Date
CN117934456A true CN117934456A (en) 2024-04-26
CN117934456B CN117934456B (en) 2024-05-28

Family

ID=90752444

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410316641.1A Active CN117934456B (en) 2024-03-20 2024-03-20 Packaging box printing quality detection method based on image processing

Country Status (1)

Country Link
CN (1) CN117934456B (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210174489A1 (en) * 2019-12-04 2021-06-10 Beijing Boe Optoelectronics Technology Co., Ltd. Method and apparatus for detecting a screen, and electronic device
CN111292321A (en) * 2020-03-13 2020-06-16 广东工业大学 Method for identifying defect image of insulator of power transmission line
CN115496218A (en) * 2022-11-16 2022-12-20 苏芯物联技术(南京)有限公司 Weld defect real-time detection method integrating evolutionary algorithm and fuzzy inference
CN117078672A (en) * 2023-10-13 2023-11-17 深圳市凯尔文电子有限公司 Intelligent detection method for mobile phone screen defects based on computer vision
CN117314925A (en) * 2023-11-30 2023-12-29 东莞市旺佳五金制品有限公司 Metal workpiece surface defect detection method based on computer vision
CN117541588A (en) * 2024-01-10 2024-02-09 大连建峰印业有限公司 Printing defect detection method for paper product
CN117705815A (en) * 2024-02-06 2024-03-15 天津滨海环球印务有限公司 Printing defect detection method based on machine vision

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
徐道磊 et al.: "Casting defect detection system based on X-ray imaging and removal of noise interference", China Master's Theses Full-text Database, Information Science and Technology Series, no. 2, 15 December 2013 (2013-12-15) *
梁承权 et al.: "A detection method for printing defects on cans based on improved Faster RCNN", Printing and Digital Media Technology Research, vol. 1, no. 6, 10 December 2023 (2023-12-10) *

Also Published As

Publication number Publication date
CN117934456B (en) 2024-05-28

Similar Documents

Publication Publication Date Title
CN107833220B (en) Fabric defect detection method based on deep convolutional neural network and visual saliency
US8121403B2 (en) Methods and systems for glyph-pixel selection
CN114494259B (en) Cloth defect detection method based on artificial intelligence
CN115829883A (en) Surface image denoising method for dissimilar metal structural member
CN109241973B (en) Full-automatic soft segmentation method for characters under texture background
CN108830149B (en) Target bacterium detection method and terminal equipment
CN113554629A (en) Strip steel red rust defect detection method based on artificial intelligence
CN115908371B (en) Plant leaf disease and pest degree detection method based on optimized segmentation
CN115984148B (en) Denoising enhancement method for high-throughput gene sequencing data
CN113012124B (en) Shoe print hole and embedded object feature detection and description method
CN116092015A (en) Road construction state monitoring method
CN116630813A (en) Highway road surface construction quality intelligent detection system
CN114998290A (en) Fabric flaw detection method, device, equipment and medium based on supervised mode
CN114862836A (en) Intelligent textile fabric printing and dyeing method and system based on data recognition graph
JP2008310817A (en) Method for detecting line structure from text map and image processor
CN117094975A (en) Method and device for detecting surface defects of steel and electronic equipment
CN115346126A (en) Side slope crack identification method
CN116934761B (en) Self-adaptive detection method for defects of latex gloves
CN109191482B (en) Image merging and segmenting method based on regional adaptive spectral angle threshold
CN116681696B (en) Mold quality monitoring method for automatic production equipment
CN117934456B (en) Packaging box printing quality detection method based on image processing
CN116993764A (en) Stomach CT intelligent segmentation extraction method
CN110766614A (en) Image preprocessing method and system of wireless scanning pen
CN113643290B (en) Straw counting method and device based on image processing and storage medium
WO2021009804A1 (en) Method for learning threshold value

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant