CN115311270B - Plastic product surface defect detection method - Google Patents
Plastic product surface defect detection method
- Publication number: CN115311270B (application CN202211237278.1A)
- Authority
- CN
- China
- Prior art keywords
- image
- gray
- window
- pixel point
- value
- Prior art date
- Legal status: Active (an assumption, not a legal conclusion)
Classifications
- G06T7/0008 — Industrial image inspection checking presence/absence
- G06T7/11 — Region-based segmentation
- G06T7/136 — Segmentation or edge detection involving thresholding
- G06V10/267 — Segmentation of patterns by operations on regions (e.g. growing, shrinking, watersheds)
- G06V10/764 — Recognition using pattern recognition or machine learning, using classification
- G06V10/82 — Recognition using pattern recognition or machine learning, using neural networks
- Y02P90/30 — Computing systems specially adapted for manufacturing
Abstract
The invention relates to the technical field of data processing, in particular to a method for detecting surface defects of plastic products. The method collects a surface image of the plastic product; acquires a gray image of the segmented image, divides it into a plurality of regions, and computes the detection necessity of each region; merges the regions whose detection necessity exceeds a preset necessity threshold into an image to be detected; builds a window centered on each pixel of that image, derives the aggregation feature of each window from the gray changes of its pixels along each direction, and obtains the aggregation degree of each window's center pixel from the aggregation features in different directions; computes a change coefficient for every pixel with the aggregation feature, uses the coefficients to produce an updated image, and threshold-segments the updated image into a binary image from which the defect positions are identified. The invention improves the accuracy of threshold segmentation at a low computational cost.
Description
Technical Field
The invention relates to the technical field of data processing, in particular to a method for detecting surface defects of a plastic product.
Background
During production, plastic products are highly susceptible to the production process and the environment. Gas released from plastic that has not yet fully cooled and hardened expands and produces swelling and bubbling defects on the product surface. Products carrying these defects are of poor quality, so the defects need to be detected.
In the traditional detection process for swelling and bubbling defects of plastic products, the defect region is obtained by threshold-segmenting the image with the OTSU technique. However, because the sizes and shapes of the swollen bubbles are not uniformly distributed, shadow regions appear in the acquired image under the illumination of the detection equipment, so the binary image produced by OTSU threshold segmentation reports wrong bubble sizes and shapes, which hinders improvement of the plastic product's production process.
Disclosure of Invention
In order to solve the technical problem, the invention provides a method for detecting surface defects of plastic products, which adopts the following technical scheme:
one embodiment of the invention provides a method for detecting surface defects of a plastic product, which comprises the following steps:
collecting a surface image of a plastic product, and performing semantic segmentation processing to obtain a segmented image; acquiring a gray level image of the segmented image;
carrying out region division on the gray level image to obtain a plurality of regions, and acquiring the detection necessity of each region according to the gray level value of pixel points in the regions; combining the areas corresponding to the detection necessity greater than a preset necessity threshold into an image to be detected;
in an image to be detected, a window is built by taking each pixel point as a center, aggregation characteristics of corresponding windows are obtained by calculating gray level changes of each pixel point in the same direction in each window, and the aggregation degree of a central point pixel of each corresponding window is obtained based on the aggregation characteristics in different directions;
for each pixel point with aggregation characteristics, acquiring a corresponding change coefficient based on the gray value of the pixel point and the mean value of the gray values of all the pixel points with aggregation characteristics, performing linear change processing on the gray values of the pixel points by using the change coefficient to obtain a processed updated image, performing threshold segmentation on the updated image to obtain a binary image, and further identifying a defect position.
Preferably, the method for acquiring the necessity of detection is as follows:
the occurrence probability of each gray level in each region is calculated, and the information entropy of all gray levels in the region is then computed as the detection necessity of the region.
Preferably, the method for acquiring the aggregative characteristics comprises the following steps:
the gray change difference between the central pixel and the surrounding pixels in each direction is calculated and compared with a set difference threshold; if any direction in the current window meets the threshold condition, the aggregative feature exists in the current window.
Preferably, the method for obtaining the aggregation degree is as follows:
calculating the gray change difference of consecutive windows in the same direction and comparing it with the difference threshold; recording the number of windows along each direction, counted from the first window that meets the threshold condition; calculating the mean of these window counts over all directions; and taking the ratio of the number of directions meeting the threshold condition in the window centered on the pixel to this mean as the aggregation degree.
Preferably, the method for obtaining the variation coefficient includes:
calculating the absolute difference between the pixel's gray value and the mean gray value, multiplying it by the aggregation degree, and normalizing the product; obtaining a function value of the pixel from a mapping function according to whether the pixel's gray value is greater or smaller than the mean; and taking the product of this function value and the normalized result as the change coefficient.
Preferably, the obtaining the processed updated image by performing the linear change processing on the gray-scale value of the pixel point by using the change coefficient includes:
for the pixels with the aggregative feature, the original gray values are multiplied by the corresponding change coefficients to obtain updated gray values, and the gray values of all such pixels are replaced by the updated values to form the updated image.
The embodiment of the invention at least has the following beneficial effects:
the necessity of detection of a swelling blister defect for each region is calculated by preprocessing the acquired image and by area-dividing the image. For the area image with high detection necessity, the calculation of the linear transformation coefficient is carried out according to the aggregation degree and the gray distribution characteristic of each pixel point, so that the transformed image is obtained, the gray value difference between the bubbling areas which may be swelling is reduced, the accuracy of threshold segmentation is greatly improved, and meanwhile, the calculated amount is less.
Drawings
In order to illustrate the embodiments of the present invention and the technical solutions of the prior art more clearly, the drawings used in their description are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart illustrating steps of a method for detecting surface defects of a plastic product according to an embodiment of the present invention.
Detailed Description
To further illustrate the technical means and effects adopted by the present invention to achieve its intended objects, the method for detecting surface defects of plastic products, together with its specific implementation, structure, features, and effects, is described in detail below in conjunction with the accompanying drawings and preferred embodiments. In the following description, different occurrences of "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following describes a specific scheme of the plastic product surface defect detection method provided by the invention in detail with reference to the accompanying drawings.
Referring to fig. 1, a flow chart of steps of a method for detecting surface defects of a plastic product according to an embodiment of the present invention is shown, the method including the following steps:
s001, collecting a surface image of the plastic product, and performing semantic segmentation processing to obtain a segmented image; a grayscale image of the segmented image is acquired.
The method comprises the following specific steps:
the surface image of the plastic product is acquired by arranging the image acquisition device, and the detection and identification of the swelling and bubbling defects are realized. Wherein, the specific image acquisition device is arranged above the conveyor belt in the transportation process of the plastic products. The specific image acquisition device comprises an industrial camera, a light source, a bracket, an image analysis system and the like.
Carrying out semantic segmentation processing on the acquired image, wherein the specific contents are as follows: the network used is a DNN network, and the data set is an acquired plastic product surface image data set; the semantic segmentation network mainly divides the collected images into two types, wherein one type is a plastic product type and is manually marked as 1; the other type is a background type, and the manual marking is carried out to be 0; the main purpose of the network is classification, and the loss function used is a cross-entropy function.
The semantically segmented image is grayed and denoised to obtain the gray image of the segmented image.
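For illustration, the graying and denoising step can be sketched as follows; the BT.601 luma weights and the box-filter denoiser are assumptions, since the embodiment does not specify which graying or denoising method is used:

```python
import numpy as np

def to_gray(bgr: np.ndarray) -> np.ndarray:
    # ITU-R BT.601 luma weights; BGR channel order assumed (assumption:
    # the embodiment does not specify the graying formula)
    b, g, r = bgr[..., 0], bgr[..., 1], bgr[..., 2]
    return 0.114 * b + 0.587 * g + 0.299 * r

def mean_denoise(gray: np.ndarray, k: int = 3) -> np.ndarray:
    # simple k x k box filter as a stand-in for the unspecified denoising step
    pad = k // 2
    padded = np.pad(gray, pad, mode="edge")
    out = np.zeros_like(gray, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + gray.shape[0], dx:dx + gray.shape[1]]
    return out / (k * k)
```

Any standard denoiser (Gaussian, median) could replace the box filter without affecting the rest of the method.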
Step S002, carrying out region division on the gray level image to obtain a plurality of regions, and acquiring the detection necessity of each region according to the gray level values of pixel points in the regions; and combining the areas corresponding to the detection necessity greater than a preset necessity threshold into one image to be detected.
The method comprises the following specific steps:
1. Divide the gray image into a plurality of regions.
Because only part of the plastic product's surface exhibits swelling and bubbling defects, the whole image is divided into regions to reduce the computation; the necessity of surface swelling/bubbling is computed for each region, and only the regions with high necessity are analyzed further.
Because of the illumination of the detection equipment and the pit-and-bump character of the swollen bubbles, the regions containing surface swelling/bubbling defects show distinctive gray differences. The whole preprocessed image is therefore divided into A equal regions, and the detection necessity of each region is determined from its gray distribution. The number A may be chosen by the implementer; the embodiment gives an empirical reference value of A = 9.
2. Acquire the detection necessity of each region according to the gray values of its pixels.
The occurrence probability of each gray level in each region is calculated, and the information entropy of all gray levels in the region is then computed as the detection necessity of the region.
The detection necessity of each region is computed from its gray distribution. The detection necessity $E_j$ of the $j$-th region is:

$$E_j = -\sum_{l=l_{\min}^{j}}^{l_{\max}^{j}} p_{j,l}\,\ln p_{j,l}, \qquad p_{j,l} = \frac{n_{j,l}}{N_j}$$

where $l$ iterates over the gray levels of the $j$-th region, from its minimum gray level $l_{\min}^{j}$ to its maximum gray level $l_{\max}^{j}$; $p_{j,l}$ is the probability that gray level $l$ occurs in the $j$-th region; $n_{j,l}$ is the number of pixels in the region with gray level $l$; and $N_j$ is the total number of pixels in the region. $E_j$ is the information entropy of the region's gray levels: the richer the gray distribution of a region, the higher the necessity of detecting it.
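The entropy-based detection necessity described above can be sketched as follows; the 3×3 grid layout used to produce the A = 9 equal regions is an assumption, since the embodiment does not state how the regions are arranged:

```python
import numpy as np

def detection_necessity(region: np.ndarray) -> float:
    """Shannon entropy of the gray levels occurring in one region.

    A richer gray-level distribution (higher entropy) marks the region
    as more likely to contain a swelling/bubbling defect.
    """
    levels, counts = np.unique(region.astype(np.uint8), return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

def split_into_regions(gray: np.ndarray, a: int = 9) -> list:
    # The embodiment divides the image into A equal regions (A = 9);
    # a 3x3 grid of blocks is an assumption about the layout.
    rows = np.array_split(gray, 3, axis=0)
    return [blk for row in rows for blk in np.array_split(row, 3, axis=1)]
```

A region of constant gray has entropy 0 and would fall below any positive necessity threshold, so it is skipped by the later steps.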
3. Combine the regions whose detection necessity exceeds the preset necessity threshold into one image to be detected.
A necessity threshold $T_E$ is set. If the detection necessity of the $j$-th region is greater than $T_E$, the region enters the computation of the following steps. For convenience of calculation, all regions meeting the threshold requirement are combined into one complete image, the image to be detected.
The necessity threshold $T_E$ depends on the implementer's specific situation; the embodiment provides an empirical reference value.
And S003, constructing a window by taking each pixel point as a center in the image to be detected, calculating the gray scale change of each pixel point in the same direction in each window to obtain the aggregation characteristic of the corresponding window, and obtaining the aggregation degree of the central point pixel of the corresponding window based on the aggregation characteristics in different directions.
The method comprises the following specific steps:
1. Obtain the aggregative feature of each window.
The gray change difference between the central pixel and the surrounding pixels in each direction is calculated and compared with a set difference threshold; if any direction in the current window meets the threshold condition, the aggregative feature exists in the current window.
Because of the illumination of the detection equipment and the distribution of the swollen bubbles on the product surface, shadows appear in the image, and their distribution follows the distribution of the bubbles. If the traditional OTSU algorithm were applied directly, the resulting binary image would contain erroneous regions, namely the shadow parts.
Because the swollen bubbles protrude, and although the bubbles themselves are not uniform, the gray distribution on the slope of the crater formed by a bubble changes continuously and is aggregative; the distribution of this aggregative characteristic reflects both the convexity of the bubbles and the corresponding shadows.
The gray distribution inside each window and between windows is analyzed through a $3\times3$ sliding window to compute the aggregation degree. Take the window whose center pixel is the $i$-th pixel of the image to be detected, and number its nine pixels 1–9 row by row, so that 5 denotes the center pixel. Four directions pass through the center, denoted by the angles $\theta \in \{45^{\circ}, 90^{\circ}, 135^{\circ}, 180^{\circ}\}$; each direction connects the center with the two opposite neighbors lying on the same line. The gray change difference degree $D_{i,\theta}$ of the window in direction $\theta$ is:

$$D_{i,\theta} = \frac{g_{a_\theta} - g_5 + \varepsilon}{g_5 - g_{b_\theta} + \varepsilon}$$

where $g_5$ is the gray value of the center pixel; $g_{a_\theta}$ and $g_{b_\theta}$ are the gray values of the two pixels on the line through the center in direction $\theta$; and $\varepsilon$ is a small constant that prevents the denominator from being 0. $D_{i,\theta}$ is the proportion between the gray changes on the two sides of the center and represents the continuity of the change within the window: a value of $D_{i,\theta}$ close to 1 indicates a continuous gray change in that direction.
A difference threshold $T_1$ is set. If, among the gray change difference degrees of the directions in the current window, some direction meets the threshold condition (i.e. $D_{i,\theta}$ is close to 1), the aggregative feature exists in the current window. The difference threshold may be chosen by the implementer; the embodiment provides an empirical reference value.
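The per-window computation above can be sketched as follows; the exact ratio form, the neighbor assignment per direction, and the values ε = 1e-3 and T₁ = 0.2 are assumptions reconstructed from the description, not values stated by the embodiment:

```python
import numpy as np

# Directions through the centre of a 3x3 window: angle -> offsets of the two
# opposite neighbours (row, col) relative to the centre. 180 deg is horizontal.
DIRS = {
    45:  ((-1,  1), (1, -1)),
    90:  ((-1,  0), (1,  0)),
    135: ((-1, -1), (1,  1)),
    180: (( 0, -1), (0,  1)),
}

def direction_differences(win: np.ndarray, eps: float = 1e-3) -> dict:
    """Gray change difference degree D per direction for one 3x3 window."""
    c = float(win[1, 1])
    out = {}
    for angle, ((ay, ax), (by, bx)) in DIRS.items():
        num = float(win[1 + ay, 1 + ax]) - c + eps
        den = c - float(win[1 + by, 1 + bx]) + eps
        out[angle] = num / den
    return out

def has_aggregation_feature(win: np.ndarray, t1: float = 0.2) -> bool:
    """True if at least one direction shows a continuous gray change.

    The condition |D - 1| <= t1 is an assumed reading of the threshold
    condition; the embodiment's empirical value is not given here.
    """
    return any(abs(d - 1.0) <= t1 for d in direction_differences(win).values())
```

A linear gray ramp across the window gives D = 1 exactly in the ramp direction, which is the "continuous change" case the text describes.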
2. Obtain the aggregation degree of the center pixel of each window from the aggregation features in different directions.
The gray change difference of consecutive windows in the same direction is calculated and compared with the difference threshold; the number of windows along each direction, counted from the first window meeting the threshold condition, is recorded; the mean of these window counts over all directions is calculated; and the ratio of the number of directions meeting the threshold condition in the window centered on the pixel to this mean is taken as the aggregation degree.
For a window meeting the threshold condition, the direction feature of the next window is analyzed: the next sliding window is taken along the direction that met the threshold condition in the previous window, its gray change difference degree is computed in the same way and compared with the difference threshold, and the number of windows along that direction, counted from the first window meeting the threshold condition, is recorded as $N_\theta$.
Specifically, to further reduce the computation of the sliding-window operation, the position of the next window is derived from the direction that satisfied the threshold condition in the previous window: starting from the window centered on the $i$-th pixel, for a qualifying direction $\theta$, the window is advanced along that direction with a step matching the sliding window size, and the gray change difference degree of the new window is recomputed. For a window that does not meet the threshold condition, there is no continuous gray change in the current window and the aggregative feature is not satisfied; the sliding-window operation then proceeds in the row direction with step 3.
The aggregation degree of each pixel with the aggregative feature is then computed. Taking the window centered on the pixel as the initial window (which meets the threshold condition): the larger the window count $N_\theta$ in a qualifying direction and the smaller the number of qualifying directions, the weaker the aggregation directivity of that window; such a pixel lies on the periphery of the aggregation trend and its aggregation degree is lower, and vice versa.
The aggregation degree $C_i$ of the $i$-th pixel of the image to be detected is:

$$C_i = \frac{K_i}{\dfrac{1}{K_i}\displaystyle\sum_{\theta=1}^{K_i} N_\theta}$$

where $K_i$ is the number of directions that meet the threshold condition in the window centered on the $i$-th pixel; $\theta$ iterates over those directions; $N_\theta$ is the number of windows meeting the threshold condition along direction $\theta$, counted from the window centered on the $i$-th pixel as the initial window; and the denominator is the mean of $N_\theta$ over the qualifying directions. The more windows a qualifying direction contains and the smaller $K_i$ is, the weaker the aggregation directivity of the window centered on the $i$-th pixel, the more peripheral the pixel is with respect to the aggregation trend, and the lower its aggregation degree; and vice versa.
The aggregation degree of each pixel with the aggregative feature is thus obtained.
According to the characteristics of the swelling and bubbling, the aggregation degree of each pixel is calculated; it reflects the distribution of the pixels in the image and makes the adaptive calculation of the linear transformation coefficient more accurate.
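The aggregation degree defined above can be sketched as follows; the function name and the dictionary input (direction angle → window count $N_\theta$) are hypothetical conveniences, not part of the embodiment:

```python
import numpy as np

def aggregation_degree(direction_counts: dict) -> float:
    """Aggregation degree C_i of a centre pixel.

    direction_counts maps each direction that meets the threshold
    condition in the initial window to N_theta, the number of
    consecutive windows that keep meeting the condition as the
    window slides along that direction.
    """
    if not direction_counts:
        return 0.0          # no qualifying direction: no aggregative feature
    k = len(direction_counts)                     # K_i: qualifying directions
    mean_n = float(np.mean(list(direction_counts.values())))
    return k / mean_n                             # C_i = K_i / mean(N_theta)
```

A pixel whose window qualifies in many directions but whose runs are short gets a high degree; a pixel on the periphery of one long run gets a low degree, matching the description above.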
Step S004, for each pixel point with aggregation characteristics, obtaining a corresponding change coefficient based on the gray value of the pixel point and the mean value of the gray values of all the pixel points with aggregation characteristics, performing linear change processing on the gray value of the pixel point by using the change coefficient to obtain a processed updated image, performing threshold segmentation on the updated image to obtain a binary image, and further identifying the defect position.
The method comprises the following specific steps:
1. Obtain the change coefficient of each pixel with the aggregative feature.
The absolute difference between the pixel's gray value and the mean gray value is calculated and multiplied by the aggregation degree, and the product is normalized; a function value of the pixel is obtained from a mapping function according to whether the pixel's gray value is greater or smaller than the mean, and the product of this function value and the normalized result is taken as the change coefficient.
Because of the image acquisition equipment, the gray-value difference between the bright face and the shadow face of a swelling/bubbling area in the image is large, and the traditional OTSU threshold segmentation yields wrong boundaries for the detected area; the acquired image to be detected therefore needs to be enhanced first. Using the aggregation degree obtained in the previous step, the gray value of each pixel having an aggregation degree is linearly transformed.
For each pixel with the aggregative feature, the strength of its gray-value change is related to its aggregation degree and its gray value: the lower the aggregation degree of the pixel and the smaller the deviation of its gray value from the mean, the weaker the adjustment applied by its change coefficient; the higher the aggregation degree and the larger the deviation, the stronger the adjustment. The change coefficient $\alpha_i$ of the $i$-th pixel with the aggregative feature is:

$$\alpha_i = F(g_i)\cdot \tanh\!\big(C_i \cdot \lvert g_i - \bar{g} \rvert\big)$$

where $g_i$ is the gray value of the $i$-th pixel with the aggregative feature; $\bar{g}$ is the mean gray value of the pixels with the aggregative feature; $C_i$ is the aggregation degree of the $i$-th pixel; $\tanh$ is the hyperbolic tangent function used to normalize the product; and $F(g_i)$ is a mapping function of the $i$-th pixel that depends on whether $g_i$ is greater or smaller than $\bar{g}$, with a hyperparameter $k$ used to adjust the value of the transformation coefficient (the embodiment gives an empirical reference value for $k$). The mapping function decreases the gray value of points with high gray values and increases the gray value of points with low gray values.
2. And carrying out linear change processing on the gray value of the pixel point by using the change coefficient to obtain a processed updated image.
For the pixel points with aggregation characteristics, the original gray values are multiplied by the corresponding change coefficients to obtain updated gray values, and the gray values of all pixel points with aggregation characteristics are replaced by the updated gray values to form an updated image.
Linear change processing is carried out on the pixel points with the aggregation characteristics, the formula of the linear change being: g'_i = α_i · g_i, where g'_i denotes the updated gray value of the i-th pixel point with aggregation characteristics and α_i its change coefficient.
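Applied to a whole image, the linear change touches only the pixels flagged as having aggregation characteristics; a minimal sketch (array and function names assumed):

```python
import numpy as np

def apply_linear_change(gray_img, coeff, mask):
    """Replace gray values of aggregated pixels with gray * coefficient.

    `mask` is a boolean array marking pixels with aggregation
    characteristics; pixels outside the mask keep their original values.
    The result is rounded and clipped back to the 8-bit gray range.
    """
    updated = gray_img.astype(float)          # astype returns a copy
    updated[mask] = updated[mask] * coeff[mask]
    return np.clip(np.rint(updated), 0, 255).astype(np.uint8)
```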
In order to increase the accuracy of OTSU threshold segmentation, the gray value of each pixel point with an aggregation degree is subjected to this linear change processing; the degree of gray value change is related to the pixel's aggregation degree and its gray value characteristics.
3. Perform threshold segmentation on the updated image to obtain a binary image, and further identify the defect position.
The OTSU algorithm is used for threshold segmentation to obtain an adaptive optimal threshold: the value of each pixel point greater than the optimal threshold is set to 1 and the value of each pixel point less than the optimal threshold is set to 0, giving the processed binary image. Defect detection is then carried out on the binary image by connected-domain analysis to identify the position of the swelling-and-bubbling defect in the image.
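The OTSU step can be sketched in pure NumPy by maximizing the between-class variance over the 256-level gray histogram; the subsequent connected-domain analysis would typically use a labeling routine such as `scipy.ndimage.label` or OpenCV's `connectedComponents` (not shown here). Function names are illustrative:

```python
import numpy as np

def otsu_threshold(gray):
    """Adaptive optimal threshold by the OTSU criterion:
    the gray level maximizing between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                    # class-0 probability
    mu = np.cumsum(prob * np.arange(256))      # cumulative mean
    mu_t = mu[-1]                              # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)           # undefined at empty classes
    return int(np.argmax(sigma_b))

def binarize(gray, thresh):
    """Pixels above the optimal threshold -> 1, others -> 0."""
    return (gray > thresh).astype(np.uint8)
```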
According to the aggregation degree and the gray distribution characteristics of the pixels with aggregation characteristics, the change coefficient of each such pixel is calculated and the image is linearly changed, which reduces the gray value difference within possible swelling-and-bubbling areas and greatly improves the accuracy of OTSU threshold segmentation.
In summary, the embodiment of the invention collects a surface image of the plastic product and performs semantic segmentation to obtain a segmented image; acquires a gray level image of the segmented image; divides the gray level image into a plurality of regions and obtains the detection necessity of each region from the gray values of the pixel points in the regions; combines the regions whose detection necessity is greater than a preset necessity threshold into an image to be detected; in the image to be detected, builds a window centered on each pixel point, obtains the aggregation characteristics of each window by calculating the gray level changes of the pixel points in the same direction within the window, and obtains the aggregation degree of each window's central pixel from the aggregation characteristics in different directions; for each pixel point with aggregation characteristics, obtains the corresponding change coefficient from the gray value of the pixel point and the mean gray value of all pixel points with aggregation characteristics, performs linear change processing on the gray value of the pixel point with the change coefficient to obtain a processed updated image, performs threshold segmentation on the updated image to obtain a binary image, and thereby identifies the defect position. The embodiment of the invention reduces the gray value difference within possible swelling-and-bubbling areas, so the accuracy of threshold segmentation is greatly improved while the amount of computation remains small.
It should be noted that the order of the above embodiments of the present invention is only for description and does not represent their relative merits. Specific embodiments have been described above. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or advantageous.
The embodiments in the present specification are described in a progressive manner, and the same or similar parts in the embodiments are referred to each other, and each embodiment focuses on differences from other embodiments.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting them; modifications of the technical solutions described in the foregoing embodiments, or equivalent replacements of some technical features thereof, which do not depart from the spirit of the technical solutions of the embodiments of the present application, are all included in the scope of the present application.
Claims (6)
1. A method for detecting surface defects of plastic products is characterized by comprising the following steps:
collecting a surface image of a plastic product, and performing semantic segmentation processing to obtain a segmented image; acquiring a gray level image of the segmentation image;
dividing the gray level image into a plurality of regions, and acquiring the detection necessity of each region according to the gray level values of pixel points in the regions; combining the areas corresponding to the detection necessity greater than a preset necessity threshold into an image to be detected;
in an image to be detected, a window is built by taking each pixel point as a center, aggregation characteristics of corresponding windows are obtained by calculating gray level changes of each pixel point in the same direction in each window, and the aggregation degree of a central point pixel of each corresponding window is obtained based on the aggregation characteristics in different directions;
for each pixel point with aggregation characteristics, obtaining a corresponding change coefficient based on the gray value of the pixel point and the mean value of the gray values of all the pixel points with aggregation characteristics, performing linear change processing on the gray value of the pixel point by using the change coefficient to obtain a processed updated image, performing threshold segmentation on the updated image to obtain a binary image, and further identifying a defect position.
2. The method for detecting the surface defects of the plastic product as claimed in claim 1, wherein the method for acquiring the detection necessity comprises:
and calculating the occurrence probability of each gray level in each region, and further calculating the information entropy of all gray levels in the region as the detection necessity of the region.
3. The method for detecting surface defects of a plastic product as claimed in claim 1, wherein the method for obtaining the aggregative features comprises:
and calculating the gray level change difference of the central pixel point and all the surrounding pixel points in the same direction, setting a difference threshold value, and if the gray level change difference of each direction in the current window has a direction meeting the threshold value condition, determining that the aggregation characteristic exists in the current window.
4. The method for detecting surface defects of plastic products as claimed in claim 3, wherein the aggregation degree is obtained by:
Calculating the gray change difference of the upper and lower windows in the same direction and comparing it with the difference threshold; recording, for any direction, the number of windows counted from the first window meeting the threshold condition; calculating the mean of these window counts over all directions; and taking the ratio of the number of directions meeting the threshold condition in the window where the central point pixel is located to this mean as the aggregation degree.
5. The method for detecting the surface defects of the plastic product as claimed in claim 1, wherein the method for obtaining the variation coefficient comprises the following steps:
Calculating the absolute value of the difference between the gray value of the pixel point and the mean gray value, multiplying it by the aggregation degree, and normalizing the resulting product; obtaining the function value of the pixel point in a mapping function according to the magnitude relation between the gray value of the pixel point and the mean gray value; and taking the product of the function value and the normalized result as the change coefficient.
6. The method for detecting surface defects of plastic products according to claim 1, wherein the step of performing linear change processing on the gray-level values of the pixel points by using the change coefficients to obtain the processed updated image comprises:
and multiplying the original gray values by the corresponding change coefficients for the pixel points with the aggregation characteristics to obtain updated gray values, and replacing the gray values of all the pixel points with the aggregation characteristics with the updated gray values to form the updated image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211237278.1A CN115311270B (en) | 2022-10-11 | 2022-10-11 | Plastic product surface defect detection method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211237278.1A CN115311270B (en) | 2022-10-11 | 2022-10-11 | Plastic product surface defect detection method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115311270A CN115311270A (en) | 2022-11-08 |
CN115311270B true CN115311270B (en) | 2023-04-07 |
Family
ID=83868454
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211237278.1A Active CN115311270B (en) | 2022-10-11 | 2022-10-11 | Plastic product surface defect detection method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115311270B (en) |
Families Citing this family (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115601362B (en) * | 2022-12-14 | 2023-03-21 | 临沂农业科技职业学院(筹) | Welding quality evaluation method based on image processing |
CN116087208B (en) * | 2023-01-20 | 2023-09-29 | 广东省中山市质量计量监督检测所 | Plastic product detecting system based on image recognition |
CN115802056B (en) * | 2023-01-31 | 2023-05-05 | 南通凯沃智能装备有限公司 | User data compression storage method for mobile terminal |
CN116051563B (en) * | 2023-03-31 | 2023-06-16 | 深圳市美亚迪光电有限公司 | Detection device for detecting surface flatness of lamp panel of GOB surface sealing technology |
CN116229438B (en) * | 2023-05-04 | 2023-07-21 | 山东超越纺织有限公司 | Spinning quality visual identification system |
CN116503404B (en) * | 2023-06-27 | 2023-09-01 | 梁山县创新工艺品股份有限公司 | Plastic toy quality detection method and device, electronic equipment and storage medium |
CN116563739B (en) * | 2023-07-07 | 2023-09-22 | 中铁九局集团第一建设有限公司 | Highway construction progress detection method based on computer vision |
CN116645429B (en) * | 2023-07-25 | 2023-10-20 | 山东中胜涂料有限公司 | Visual-aided paint production sample color analysis and detection method |
CN116703251B (en) * | 2023-08-08 | 2023-11-17 | 德润杰(山东)纺织科技有限公司 | Rubber ring production quality detection method based on artificial intelligence |
CN116758056B (en) * | 2023-08-09 | 2023-12-26 | 广东乾威精密连接器有限公司 | Electrical terminal production defect detection method |
CN116740070B (en) * | 2023-08-15 | 2023-10-24 | 青岛宇通管业有限公司 | Plastic pipeline appearance defect detection method based on machine vision |
CN116735612B (en) * | 2023-08-15 | 2023-11-07 | 山东精亿机械制造有限公司 | Welding defect detection method for precise electronic components |
CN116797599B (en) * | 2023-08-22 | 2023-11-21 | 山东奥晶生物科技有限公司 | Stevioside quality online detection method and main control equipment |
CN117274247B (en) * | 2023-11-20 | 2024-03-29 | 深圳市海里表面技术处理有限公司 | Visual detection method for quality of LTCC conductor surface coating |
CN117876374B (en) * | 2024-03-13 | 2024-05-28 | 西安航科创星电子科技有限公司 | Metal slurry wiring hole filling and monitoring method for HTCC ceramic |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001203855A (en) * | 2000-01-18 | 2001-07-27 | Dainippon Printing Co Ltd | Image input device |
CN111696123A (en) * | 2020-06-15 | 2020-09-22 | 荆门汇易佳信息科技有限公司 | Remote sensing image water area segmentation and extraction method based on super-pixel classification and identification |
CN115082482B (en) * | 2022-08-23 | 2022-11-22 | 山东优奭趸泵业科技有限公司 | Metal surface defect detection method |
- 2022-10-11 CN CN202211237278.1A patent/CN115311270B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN115311270A (en) | 2022-11-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN115311270B (en) | Plastic product surface defect detection method | |
CN113689428B (en) | Mechanical part stress corrosion detection method and system based on image processing | |
CN115311292B (en) | Strip steel surface defect detection method and system based on image processing | |
CN115239704B (en) | Accurate detection and repair method for wood surface defects | |
CN114897896B (en) | Building wood defect detection method based on gray level transformation | |
CN116912261B (en) | Plastic mold injection molding surface defect detection method | |
CN116188468B (en) | HDMI cable transmission letter sorting intelligent control system | |
CN114820625B (en) | Automobile top block defect detection method | |
CN114723705A (en) | Cloth flaw detection method based on image processing | |
CN116309577B (en) | Intelligent detection method and system for high-strength conveyor belt materials | |
CN115861290A (en) | Method for detecting surface defects of skin-touch wooden door | |
CN116883408B (en) | Integrating instrument shell defect detection method based on artificial intelligence | |
CN116246174B (en) | Sweet potato variety identification method based on image processing | |
CN117522864B (en) | European pine plate surface flaw detection method based on machine vision | |
CN114998341A (en) | Gear defect detection method and system based on template matching | |
CN117541582B (en) | IGBT insulation quality detection method for high-frequency converter | |
CN115222735B (en) | Metal mold quality detection method based on pockmark defects | |
CN116206208B (en) | Forestry plant diseases and insect pests rapid analysis system based on artificial intelligence | |
CN117388263A (en) | Hardware terminal quality detection method for charging gun | |
CN115841600B (en) | Deep learning-based sweet potato appearance quality classification method | |
CN112862767B (en) | Surface defect detection method for solving difficult-to-distinguish unbalanced sample based on metric learning | |
CN115018792A (en) | Deep-drawing part wrinkling detection method based on computer vision technology | |
CN117934469B (en) | Visual detection method for production quality of mining high-voltage frequency converter shell | |
CN117635507B (en) | Plastic particle online visual detection method and system | |
CN117830300B (en) | Visual-based gas pipeline appearance quality detection method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||