CN113962993A - Paper cup raw material quality detection method based on computer vision - Google Patents


Info

Publication number: CN113962993A (application CN202111570567.9A)
Authority: CN (China)
Prior art keywords: point, pixel points, pixel, texture, neighborhoods
Legal status: Granted; Active
Other languages: Chinese (zh)
Other versions: CN113962993B (granted publication)
Inventor: 张扬 (Zhang Yang)
Current assignee: Wuhan Hangda Packaging Co ltd
Original assignee: Wuhan Linshan Industry And Trade Co ltd

Classifications

    • G06T7/0004 Industrial image inspection (under G06T7/00 Image analysis; G06T7/0002 Inspection of images, e.g. flaw detection)
    • G06T7/11 Region-based segmentation (under G06T7/10 Segmentation; Edge detection)
    • G06T7/40 Analysis of texture
    • G06T2207/30108 Industrial image inspection (indexing scheme, G06T2207/30 Subject of image)
    • G06T2207/30124 Fabrics; Textile; Paper


Abstract

The invention relates to the field of artificial intelligence, and in particular to a computer-vision-based method for detecting the quality of paper cup raw material, comprising the following steps: acquiring a grayscale image of the laminating paper; acquiring a texture vector for each pixel point; obtaining the 1st region, which comprises: obtaining the consistency rate of each of the eight neighborhood pixel points with the initial center point; performing a first round of merging of the eight neighborhood pixel points according to the consistency rate; calculating the consistency rate of the eight neighborhood pixel points of each newly added pixel point with the region where the initial center point is located; performing a second round of merging of those pixel points according to that consistency rate; repeating the second-round calculation and merging steps until the 1st region is obtained; iteratively dividing the remaining pixel points by the method used to obtain the 1st region, yielding all regions of different laminating thickness; and evaluating the quality of the laminating paper according to the regions of different laminating thickness. The method is used to detect the quality of laminating paper and can improve the accuracy of its quality detection.

Description

Paper cup raw material quality detection method based on computer vision
Technical Field
The invention relates to the field of artificial intelligence, in particular to a paper cup raw material quality detection method based on computer vision.
Background
The hot-drink paper cup is usually made of PE (polyethylene) film coated paper, which is a composite material formed by coating plastic particles on the surface of paper by a casting machine and can resist oil, water and heat. If the PE film coated paper is not uniformly coated, the manufactured paper cup may have the defect of water leakage in the place where the film coating is thinner, so that the quality of the used PE film coated paper needs to be detected before the paper cup is manufactured.
The existing quality detection means for PE laminating paper are mainly manual spot checks and threshold segmentation. In manual spot checks, the thickness of the laminating paper is judged from the experience of operators; in threshold segmentation, areas of inconsistent thickness in the laminating paper are segmented according to the gray levels of an image.
However, in the PE film coated paper used for manufacturing paper cups, the differences between areas of different coating thickness are small, so manual spot checks are prone to missed and false detections, and threshold segmentation of areas of inconsistent film thickness by image gray level is disturbed by the texture and lighting on the PE film coated paper and has low accuracy. A method that improves both the accuracy and the efficiency of quality detection for PE laminating paper is therefore needed.
Disclosure of Invention
The invention provides a computer-vision-based method for detecting the quality of paper cup raw material, comprising the following steps: acquiring a grayscale image of the laminating paper; acquiring a texture vector for each pixel point; obtaining the 1st region, which comprises: obtaining the consistency rate of each of the eight neighborhood pixel points with the initial center point; performing a first round of merging of the eight neighborhood pixel points according to the consistency rate; calculating the consistency rate of the eight neighborhood pixel points of each newly added pixel point with the region where the initial center point is located; performing a second round of merging of those pixel points according to that consistency rate; repeating the second-round calculation and merging steps until the 1st region is obtained; iteratively dividing the remaining pixel points by the method used to obtain the 1st region to obtain all regions of different laminating thickness; and evaluating the quality of the PE laminating paper according to the different laminating-thickness regions. The method analyzes the PE laminating paper image to obtain the texture vector of each pixel point, iteratively merges the pixel points in the image according to the texture characteristics of each pixel point and its eight neighborhood pixel points to obtain regions of different laminating thickness, and finally evaluates the quality of the PE laminating paper according to those regions.
In order to achieve the purpose, the invention adopts the following technical scheme that the paper cup raw material quality detection method based on computer vision comprises the following steps:
s1: and obtaining the gray level image of the PE laminating paper.
S2: and acquiring the texture vector of each pixel point in the gray image according to the gray value of the pixel point.
S3: obtaining a 1 st thickness region comprising:
s301: and selecting the pixel point with the maximum texture brightness in the texture vector as an initial central point, and calculating to obtain the consistency ratio of the pixel point in the eight neighborhoods of the initial central point and the region where the initial central point is located.
S302: and merging the pixels in the eight neighborhoods of the initial central point according to the consistency rate to obtain all newly added pixels and residual pixels.
S303: and respectively taking each newly added pixel point as a central point, and calculating the consistency ratio of the pixel points of the undivided region in the eight neighborhoods of all the newly added pixel points to the region where the initial central point is located.
S304: and merging the pixels in the non-divided regions in the eight neighborhoods of the newly added pixels according to the consistency ratio of the pixels in the non-divided regions in the eight neighborhoods of all the newly added pixels to the region in which the initial center point is located, so as to obtain all the newly added pixels and the residual pixels.
S305: and continuously repeating the steps S303-S304 until no new pixel points are added, and obtaining a 1 st thickness area and residual pixel points.
S4: and selecting the pixel point with the maximum texture brightness from the residual pixel points as a new initial central point, obtaining a 2 nd thickness area and the residual pixel points according to the method for obtaining the 1 st thickness area, and sequentially until all the pixel points are divided, thereby obtaining all areas with different laminating thicknesses.
S5: and evaluating the quality of the PE laminating paper according to different laminating thickness areas.
Further, according to the paper cup raw material quality detection method based on computer vision, the texture vector of each pixel point in the gray image is obtained according to the following mode:
and setting the size of a sliding window, and carrying out sliding window detection on the gray level image to obtain a gray level sequence of all pixel points in each window.
And acquiring an upper quartile, a lower quartile, a maximum value and a minimum value in the gray value sequence.
And calculating the mean value of the gray values between the maximum value and the upper quartile in the gray value sequence to obtain the texture brightness of the central pixel point of each window.
And calculating the mean value of the gray values between the minimum value and the lower quartile in the gray value sequence to obtain the background color brightness of the central pixel point of each window.
And calculating the difference value between the texture brightness and the background brightness to obtain the texture definition of the central pixel point of each window.
And taking the texture brightness, the background brightness and the texture definition as three components of the texture vector to obtain the texture vector of each pixel point in the gray level image.
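The quartile-based texture vector described above can be sketched as follows. This is a minimal illustration; the window size `win` is an assumed parameter, since the patent treats it as a design choice.

```python
import numpy as np

def texture_vector(gray, win=5):
    """Per-pixel texture vector (texture brightness, background-color
    brightness, texture definition) from a sliding window, following the
    quartile rule described above."""
    h, w = gray.shape
    r = win // 2
    padded = np.pad(gray.astype(float), r, mode="edge")  # full window at borders
    bright = np.empty((h, w))
    ground = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            vals = padded[y:y + win, x:x + win].ravel()
            q1, q3 = np.percentile(vals, [25, 75])
            # Texture brightness: mean of values from the upper quartile to the max.
            bright[y, x] = vals[vals >= q3].mean()
            # Background-color brightness: mean of values from the min to the lower quartile.
            ground[y, x] = vals[vals <= q1].mean()
    clarity = bright - ground  # texture definition
    return np.stack([bright, ground, clarity], axis=-1)
```

Averaging the tails instead of taking the raw extrema is what gives the noise resistance the description mentions.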
Further, according to the paper cup raw material quality detection method based on computer vision, the process of obtaining the coincidence rate of pixel points in eight neighborhoods of the initial center point and the area where the initial center point is located is as follows:
and calculating cosine similarity between the texture vectors of the initial central point and the pixel points in the eight neighborhoods of the initial central point to obtain similarity and similarity sequences of the initial central point and the pixel points in the eight neighborhoods of the initial central point.
Calculating the consistency ratio of the pixel points in the eight neighborhoods of the initial central point and the area where the initial central point is located according to the similarity and the similarity sequence, wherein the expression of the consistency ratio is as follows:
(Reconstructed from the variable definitions below; the original equation images are not reproduced in this text.)

$$Q_i = \frac{s_i}{\max(S)}, \qquad S = \{s_1, s_2, \ldots, s_n\}$$

In the formula, $Q_i$ is the consistency rate of the $i$-th pixel point $x_i$ in the eight neighborhoods of the initial center point $c$ with the region where $c$ is located; $s_i$ is the similarity between pixel point $x_i$ and the initial center point $c$; $i$ is the serial number of the pixel point in the eight neighborhoods of the initial center point; $s_j$ is the similarity between pixel point $x_j$ and $c$; $n$ is the number of pixel points in the eight neighborhoods of the initial center point; $S$ is the similarity sequence; and $\max(S)$ is the maximum value in the similarity sequence.
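Under one plausible reading of the consistency rate (cosine similarity to the initial center point, normalized by the maximum similarity in the neighborhood; the normalization is an assumption, since the equation images are not reproduced in this text), the computation looks like:

```python
import numpy as np

def consistency_rates(center_vec, neigh_vecs):
    """Consistency rate of each eight-neighborhood pixel with the region
    of the initial center point: cosine similarity between texture
    vectors, scaled by the neighborhood maximum (assumed normalization)."""
    c = np.asarray(center_vec, dtype=float)
    n = np.asarray(neigh_vecs, dtype=float)
    sims = n @ c / (np.linalg.norm(n, axis=1) * np.linalg.norm(c))  # s_i
    return sims / sims.max()  # Q_i = s_i / max(S)
```

A neighbor whose texture vector points the same way as the center's gets rate 1; less similar neighbors get proportionally smaller rates.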
Further, according to the paper cup raw material quality detection method based on computer vision, the process of merging the pixel points in the eight neighborhoods of the initial central point is as follows:
setting a threshold value, and judging the relation between the coincidence rate of each pixel point in the eight neighborhoods of the initial central point and the area where the initial central point is positioned and the threshold value.
And when the consistency rate of each pixel point in the eight neighborhoods of the initial central point and the area where the initial central point is located is greater than a threshold value, merging the pixel point into the area where the initial central point belongs.
And when the consistency rate of each pixel point in the eight neighborhoods of the initial central point and the region where the initial central point is located is not greater than the threshold value, the pixel points are not merged.
Further, according to the paper cup raw material quality detection method based on computer vision, the consistency rate of pixel points of an undivided region in eight neighborhoods of all newly added pixel points and a region where an initial central point is located is obtained according to the following mode:
and calculating the similarity between the pixel points of the non-divided region in the eight neighborhoods of all the newly added pixel points and the newly added pixel points corresponding to the pixel points.
And calculating the direction angle from the pixel point of the non-divided region in the eight neighborhoods of the newly added pixel points to the initial central point, and counting the pixel points of the divided regions contained in each direction to obtain a pixel point sequence in each direction.
And obtaining a pixel point texture vector sequence in each direction according to the pixel point sequence in each direction.
And calculating to obtain the distribution of each pixel point in the texture vector sequence in each direction according to the texture brightness, the texture definition and the background brightness in the texture vector sequence.
And obtaining the texture vector change condition of the pixel points in each direction according to the distribution of each pixel point in the texture vector sequence in each direction.
And obtaining the consistency rate of the pixel points of the non-divided regions in the eight neighborhoods of all the newly added pixel points and the region where the initial central point is located according to the consistency rate of all the newly added pixel points, the texture vector change condition of the pixel points in each direction and the similarity between the pixel points of the non-divided regions in the eight neighborhoods of all the newly added pixel points and the newly added pixel points corresponding to the pixel points.
Further, according to the paper cup raw material quality detection method based on computer vision, the expression of the coincidence rate of pixel points of the undivided region in eight neighborhoods of all the newly added pixel points and the region where the initial center point is located is as follows:
(Reconstructed from the variable definitions below; the original equation images are not reproduced in this text.)

$$Q'_{j,k} = \frac{Q_j}{\max(Q)} \cdot s_{j,k} \cdot \left(1 - \frac{D_{d^*,d}}{\sum_{d'=1}^{m} D_{d^*,d'}}\right)$$

In the formula, $Q'_{j,k}$ is the consistency rate of the $k$-th pixel point $y_k$ of the undivided region in the eight neighborhoods of the $j$-th newly added pixel point $x_j$ with the region where the initial center point is located; $Q_j$ is the consistency rate of the $j$-th newly added pixel point $x_j$; $Q$ is the consistency-rate sequence of the newly added pixel points of the current round, and $\max(Q)$ is the maximum value in that sequence; $s_{j,k}$ is the similarity between pixel point $x_j$ and pixel point $y_k$; $d^*$ is the direction in which the texture vector changes least, taken from the texture-vector variation sequence; $D_{d^*,d}$ is the divergence between the distribution of pixel points in direction $d^*$ and the distribution in direction $d$, the direction in which $y_k$ lies; $m$ is the number of all directions; and $\sum_{d'=1}^{m} D_{d^*,d'}$ is the sum of the divergences between direction $d^*$ and all directions.
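A sketch of this second-round consistency rate. The combination of factors is an assumption reconstructed from the variable definitions, since the patent's equation images are not reproduced in this text.

```python
import numpy as np

def second_round_rate(q_parent, q_all, sim, div_to_dstar, all_divs):
    """Second-round consistency rate (assumed form): the parent pixel's
    rate normalized by the round maximum, weighted by the similarity to
    the parent and by how little the candidate's direction diverges from
    the direction of least texture-vector change."""
    direction_weight = 1.0 - div_to_dstar / np.sum(all_divs)
    return (q_parent / np.max(q_all)) * sim * direction_weight
```

A candidate reached from the round's strongest parent, fully similar to it, and lying along the direction of least texture change gets a rate of 1; each weakening factor lowers the rate.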
Further, according to the paper cup raw material quality detection method based on computer vision, the process of evaluating the quality of the PE laminating paper is as follows:
and obtaining a pixel point sequence with equal distance from the pixel point in each laminating thickness area to the initial central point according to the distance from the pixel point in each laminating thickness area to the initial central point.
And obtaining the laminating difference between the 1 st laminating thickness area and each of the rest laminating thickness areas according to the texture brightness difference between all pixel points in the pixel point sequence in the 1 st laminating thickness area and corresponding pixel points in each of the rest laminating thickness areas.
And sequencing the lamination differences to obtain the area with the thinnest lamination thickness.
And calculating the occupation ratio of the area with the thinnest lamination thickness in the lamination paper.
And according to the number of the areas with different film coating thicknesses and the ratio of the area with the thinnest film coating thickness in the film coating paper, performing quality evaluation on the film coating paper.
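A minimal sketch of this final grading step. The acceptance thresholds `max_regions` and `max_ratio` are assumptions; the patent leaves the concrete grading rule to the implementer.

```python
import numpy as np

def evaluate_quality(labels, thinnest_label, max_regions=3, max_ratio=0.05):
    """Grading sketch: a sheet passes when the number of distinct
    laminating-thickness regions and the area ratio of the thinnest
    region (found by the lamination-difference ranking above) are both
    small.  Returns (passed, thin_ratio)."""
    n_regions = len(np.unique(labels))
    thin_ratio = float(np.mean(labels == thinnest_label))
    return n_regions <= max_regions and thin_ratio <= max_ratio, thin_ratio
```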
The invention has the beneficial effects that:
the method comprises the steps of analyzing images of the PE laminating paper to obtain texture vectors of pixel points, iteratively combining the pixel points in the images according to texture characteristics of each pixel point and eight neighborhood pixel points of each pixel point to obtain regions with different laminating thicknesses, and finally evaluating the quality of the PE laminating paper according to the regions with different laminating thicknesses.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic flow chart of a paper cup raw material quality detection method provided by an embodiment of the invention;
fig. 2 is a schematic flow chart of a paper cup raw material quality detection method provided by an embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1
The embodiment of the invention provides a paper cup raw material quality detection method based on computer vision, which is shown in figure 1 and comprises the following steps:
s1: and obtaining the gray level image of the PE laminating paper.
A grayscale image is also called a gray-level image: the range between white and black is divided logarithmically into a number of levels, called gray levels. Here the gray scale is divided into 256 levels.
S2: and acquiring the texture vector of each pixel point in the gray image according to the gray value of the pixel point.
Wherein, the texture vector is obtained by texture brightness, texture definition and background color brightness.
S3: obtaining a 1 st thickness region comprising:
s301: and selecting the pixel point with the maximum texture brightness in the texture vector as an initial central point, and calculating to obtain the consistency ratio of the pixel point in the eight neighborhoods of the initial central point and the region where the initial central point is located.
The greater the similarity between the pixel point and the initial central point is, the greater the consistency rate of the pixel point is.
S302: and merging the pixels in the eight neighborhoods of the initial central point according to the consistency rate to obtain all newly added pixels and residual pixels.
The greater the consistency rate, the greater the probability that the laminating thickness at the pixel point is consistent with that at the initial center point.
S303: and respectively taking each newly added pixel point as a central point, and calculating the consistency ratio of the pixel points of the undivided region in the eight neighborhoods of all the newly added pixel points to the region where the initial central point is located.
The higher the consistency rate of a pixel point of the undivided region in the eight neighborhoods of a newly added pixel point, the more likely its laminating thickness is consistent with that of the region where the initial center point is located.
S304: and merging the pixels in the non-divided regions in the eight neighborhoods of the newly added pixels according to the consistency ratio of the pixels in the non-divided regions in the eight neighborhoods of all the newly added pixels to the region in which the initial center point is located, so as to obtain all the newly added pixels and the residual pixels.
Wherein, the merging manner is consistent with S302.
S305: and continuously repeating the steps S303-S304 until no new pixel points are added, and obtaining a 1 st thickness area and residual pixel points.
When no pixel points are newly added, all pixel points with the same laminating-thickness characteristics have been classified into one area.
S4: and selecting the pixel point with the maximum texture brightness from the residual pixel points as a new initial central point, obtaining a second thickness area and the residual pixel points according to the method for obtaining the 1 st thickness area, and obtaining all areas with different laminating thicknesses after all the pixel points are divided.
And obtaining all the areas with different film coating thicknesses according to the texture vector characteristics of the pixel points.
S5: and evaluating the quality of the PE laminating paper according to different laminating thickness areas.
And according to the number of the areas with different film coating thicknesses and the ratio of the area with the thinnest film coating thickness in the film coating paper, performing quality evaluation on the film coating paper.
The beneficial effect of this embodiment lies in:
the PE laminating paper image is analyzed to obtain texture vectors of the pixel points, the pixel points in the image are combined in an iteration mode according to texture characteristics of each pixel point and eight neighborhood pixel points of each pixel point to obtain regions with different laminating thicknesses, and finally quality evaluation is conducted on the PE laminating paper according to the regions with different laminating thicknesses.
Example 2
The main purposes of the invention are: and processing the acquired PE laminating paper image by using computer vision, dividing the areas with different laminating thicknesses of the PE laminating paper, and evaluating the quality of the PE laminating paper.
In order to realize the content, the invention designs a method for analyzing whether the PE laminating paper laminating thickness is consistent through computer vision, thereby carrying out the quality evaluation of the PE laminating paper.
The embodiment of the invention provides a paper cup raw material quality detection method based on computer vision, which is shown in figure 2 and comprises the following steps:
the method comprises the following steps: and obtaining the gray level image of the PE laminating paper.
Before the paper cup is manufactured, the quality of the PE film coated paper which is used as the paper cup raw material needs to be detected. The PE coated paper image is analyzed, and areas with different coating thicknesses of the PE coated paper are segmented, so that the quality of the PE coated paper is evaluated.
In this embodiment, regions must be divided according to the image characteristics of the PE laminating paper, so an image of the paper is collected first. A single light source is mounted directly above the PE laminating paper, a camera is placed directly above it, and an image is captured that contains only the PE laminating paper and no other regions.
In order to facilitate subsequent analysis, the PE laminating paper image is converted into a gray image.
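The conversion to grayscale is a standard luma weighting; a minimal NumPy sketch (in practice `cv2.cvtColor` with `cv2.COLOR_RGB2GRAY` performs the same operation):

```python
import numpy as np

def to_gray(rgb):
    """Convert an RGB image array to grayscale using ITU-R BT.601 luma
    weights, the weighting cv2.cvtColor applies for COLOR_RGB2GRAY."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    return np.round(0.299 * r + 0.587 * g + 0.114 * b).astype(np.uint8)
```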
Step two: and according to the image characteristics of the PE laminating paper, performing region segmentation, and segmenting the positions with different laminating thicknesses into different regions.
The PE film coated paper is a composite material formed by coating plastic particles on the surface of paper through a casting machine, and if a die orifice of the casting machine is blocked or abraded, the PE film coated paper is uneven in film coating, and the phenomenon that the film coating thickness of a partial area is thick and the film coating thickness of the partial area is thin is caused. The surface of the paper has fine textures, the textures are blurred due to the fact that the reflectivity of the paper to light is small, however, after the surface of the paper is coated with the transparent plastic particles, the reflectivity of the paper to light is increased, and the fine textures on the paper are clear under the reflection effect of the light. The area with thick film has larger light reflectivity than the area with thin film, so that the area with thick film has brighter light and clearer texture than the area with thin film under the same illumination intensity. Under the irradiation of a single light source above the PE laminating paper, the area with the same laminating thickness has the characteristics of bright middle of the image, clearer texture, dark periphery of the image and fuzzy texture.
And analyzing the image characteristics of the PE coated paper to obtain the characteristics of texture definition, texture brightness, background brightness and the like of each pixel point, dividing the positions with inconsistent coating thickness into different areas according to the characteristics, obtaining the number of the areas with inconsistent coating thickness, and evaluating the quality of the PE coated paper.
1. And acquiring a pixel point texture vector.
The illumination of the same local area of the image is unchanged: the highlighted points within the area are texture, and the darker points are ground color. The larger the gray-value difference between texture and ground color, the clearer the texture of the area; otherwise, the texture is blurred. The texture definition is calculated from these characteristics:
Taking a pixel point on the image as the center, construct a window of set size (the size appears only in an equation image that is not reproduced in this text) and obtain the gray-value sequence of all pixel points in the window. Analyze the sequence to obtain its upper quartile and lower quartile. To reduce the interference of noise points, take the mean of the data between the maximum value and the upper quartile of the sequence (both inclusive) as the texture brightness a of the window's central pixel point, and take the mean of the data between the minimum value and the lower quartile of the sequence (both inclusive) as the background-color brightness c of the window's central pixel point. The difference between the texture brightness and the background-color brightness is the texture definition b of the window's central pixel point. In the same way, obtain the texture definition b, texture brightness a, and background-color brightness c of every pixel point on the image, giving the texture vector T = (a, c, b) of each pixel point. The larger the texture definition, the more likely the corresponding pixel point lies in an area of thick film coating; the larger the texture brightness, the more likely it lies in an area of strong illumination.
If the texture brightness of a pixel point is larger, the pixel point is more likely to lie in an area of high illumination intensity. Select the point with the maximum texture brightness as the initial center point; if several points share the maximum texture brightness, take their central point as the initial center point. The specific steps of region segmentation are as follows:
2. Calculate the similarity between each pixel point of the undivided region in the eight neighborhoods of the initial central point and the initial central point.
A single light source above the PE laminated paper makes the image bright in the middle and dark at the periphery, but within a very small area the illumination change is negligible, i.e. local illumination invariance holds: the initial central point and the points in its eight neighborhoods are under the same local illumination. For each undivided pixel point in the eight neighborhoods of the initial central point, the cosine similarity between its texture vector and the texture vector of the initial central point is computed and taken as the similarity between the two points. Computing this for every undivided pixel point in the eight neighborhoods yields a similarity sequence, whose length equals the number of undivided pixel points in the eight neighborhoods. The higher the similarity, the closer the texture definition, texture brightness and background-color brightness of the two points; the lower the similarity, the larger the difference between them.
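The cosine similarity between two texture vectors used in this step is the standard definition, a minimal sketch:

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine similarity between two texture vectors, as used in step 2:
    the cosine of the angle between them, 1.0 for proportional vectors."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
```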
3. Calculate the consistency rate of each pixel point of the undivided region in the eight neighborhoods with the region where the initial central point lies.
The initial central point is the point with the maximum texture brightness, i.e. the most strongly illuminated spot. Illumination decays around the initial central point, and the decay speed is the same in every direction. For each undivided pixel point in the eight neighborhoods of the initial central point, its consistency rate with the region of the initial central point is calculated from the change of the points in all directions of the eight neighborhoods (the expression appears only as an image in the original). The quantities entering the expression are: the similarity between the pixel point and the initial central point; the serial numbers and similarities of the other undivided pixel points in the eight neighborhoods; the number of undivided pixel points in the eight neighborhoods; the similarity sequence; the difference between the pixel point's similarity and the maximum value of the similarity sequence; and the maximum value of the similarity sequence.
The greater the similarity between a pixel point and the initial central point, the greater its consistency rate; the greater the difference between its similarity and the maximum value in the similarity sequence, the smaller its consistency rate.
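The exact consistency-rate expression appears only as an image in the original; the following sketch merely encodes the stated monotonic behaviour (the rate grows with the pixel's similarity and shrinks with its gap to the best similarity), and the formula used here is an assumption, not the patent's:

```python
import numpy as np

def consistency_rate(similarities):
    """Hypothetical consistency rate over the neighbourhood similarities:
    rewards high similarity, penalises the gap to the maximum similarity."""
    s = np.asarray(similarities, float)
    return s - (s.max() - s)   # assumption, not the patent's image formula
```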
4. Merge the pixel points according to the consistency rate.
The higher the consistency rate, the higher the probability that the pixel point's film thickness is consistent with that of the initial central point. Combining each pixel point's consistency rate with a threshold, the pixel points are merged:
If the consistency rate is greater than the threshold, the film thickness of the pixel point is consistent with that of the initial central point, and the pixel point is added to the region to which the initial central point belongs.
If the consistency rate is not greater than the threshold, the film thickness of the pixel point is inconsistent with that of the initial central point, and the pixel point is not merged.
The pixel points added in this step are marked as the 1st round of newly added pixel points.
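The threshold test above can be sketched as a simple partition; the 0.85 threshold is the empirical value stated later in the text:

```python
def merge_by_rate(rates, threshold=0.85):
    """Split neighbourhood pixels (by index) into merged / rejected according
    to their consistency rate, as in the merging step above."""
    merged = [i for i, r in enumerate(rates) if r > threshold]
    rejected = [i for i, r in enumerate(rates) if r <= threshold]
    return merged, rejected
```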
5. Taking each newly added pixel point as a center in turn, calculate the similarity between each pixel point of the undivided region in its eight neighborhoods and that center.
The pixel points added in the previous step are the newly added pixel points of the current round. For each newly added pixel point of the round, consider every undivided pixel point in its eight neighborhoods (pixel points that were examined but not merged in any earlier round are not considered again), and compute the cosine similarity between the texture vector of the newly added pixel point and the texture vector of the neighborhood pixel point as the similarity between the two points.
6. Calculate the consistency rate of the pixel points of the undivided region in the eight neighborhoods of all the newly added pixel points with the region where the initial central point lies.
The initial central point is the point with the maximum texture brightness, i.e. the most strongly illuminated area. Illumination decays around the initial central point, at the same speed in every direction. The change of the texture vectors of the pixel points along each direction from the initial central point is therefore analyzed: if the film thickness is consistent along a direction, the texture definition, texture brightness and background-color brightness all decrease gradually along it, and the degree of decrease is essentially the same in every direction.
The direction angle from each undivided pixel point in the eight neighborhoods of a newly added pixel point to the initial central point is calculated with the arctangent function. The already-divided pixel points contained in each direction are counted, giving a pixel point sequence for each direction, from which a texture-vector sequence for each direction is obtained: for each pixel point in a direction's sequence, the distance from the pixel point to the initial central point, rounded down, is its position in the texture-vector sequence, and the pixel point's texture vector is the value at that position. If the position between two non-adjacent texture vectors in the sequence is empty, half the sum of the two texture vectors (the exact expression appears as an image in the original) is used to fill it. This yields a texture-vector sequence in each direction, and the sequences in all directions have the same length.
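The arctangent-based direction binning can be sketched as follows; dividing the full circle into 8 equal sectors is an assumption, as the original does not state the number of directions explicitly:

```python
import math

def direction_index(px, py, cx, cy, n_dirs=8):
    """Bin the direction from pixel (px, py) to the initial centre (cx, cy)
    into one of n_dirs equal angular sectors (n_dirs is an assumption)."""
    angle = math.atan2(cy - py, cx - px) % (2 * math.pi)
    return int(angle / (2 * math.pi / n_dirs)) % n_dirs
```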
From the texture-vector sequence of each direction, the distribution of each pixel point in the sequence is calculated (the formula appears as an image in the original). The formula uses, for the pixel points of the direction's texture-vector sequence: their texture definition, their texture brightness, their background-color brightness, and the length of the texture-vector sequence. The distribution of the pixel points in a direction's texture-vector sequence is then normalized to obtain the distribution of that direction. The distribution of every direction is obtained in the same way, so that the consistency rate of the pixel points can be calculated from the distributions of the directions.
For each undivided pixel point in the eight neighborhoods of a newly added pixel point of the current round (pixel points not merged in earlier rounds are not considered), the consistency rate with the region where the initial central point lies is calculated (the expression appears as an image in the original). Interpretation of the formula: it uses the consistency rate of the newly added pixel point itself; the consistency-rate sequence of the current round's newly added pixel points (the sequence formed by the consistency rates of the pixel points newly added in the round) and the maximum value of that sequence; and the similarity between the undivided pixel point and the newly added pixel point.
The formula further uses the texture-vector change sequence, obtained as follows. For each direction, take the last 5 pixel points of the direction and the 1st to 6th pixel points counted from the end, normalize each group into a new sequence, and calculate the divergence between the two sequences to represent the texture-vector change of the pixel points in that direction. Calculating this for all directions gives the texture-vector change sequence, and the direction in which the texture vector changes the least is identified.
In this scheme, the change of the local texture vector along a direction is analyzed because the illumination attenuation within a local range is extremely low: if the local texture-vector change along a direction is large, there may be a place of inconsistent film thickness within the local range; if it is small, the film thickness along that local direction is consistent. The distribution of the direction with the least texture-vector change is therefore taken as the standard distribution of uniform film thickness.
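The divergence between two normalized sequences used above is the KL divergence (relative entropy), which can be sketched as:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL divergence (relative entropy) D(p || q) between two distributions,
    used to compare direction distributions; eps guards against log(0)."""
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))
```

Identical distributions give a divergence of zero; the more the two distributions differ, the larger the divergence, matching the interpretation given below.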
In the expression, the KL divergence (relative entropy) between the distribution of the direction with the least texture-vector change and the distribution of each other direction measures how much that direction deviates from the standard direction: the larger this divergence, the greater the difference between the two directions; the smaller the divergence, the smaller the difference. The expression also uses the number of all directions and the sum of the divergences between the standard direction and all directions.
If the divergence between the direction with the least change and another direction is larger, the corresponding term is smaller, and vice versa. If the similarity between the undivided pixel point and the newly added pixel point is larger, the consistency rate of the newly added pixel point is larger, and the divergence term is smaller, then the consistency rate of the undivided pixel point is larger, and the film thickness of the pixel point is more likely to be consistent with that of the region where the initial central point lies.
7. Merge the pixel points according to the consistency rate.
The higher the consistency rate, the higher the probability that the pixel point's film thickness is consistent with that of the initial central point. Combining each pixel point's consistency rate with the threshold, the pixel points are merged:
If the consistency rate is greater than the threshold, the film thickness of the pixel point is consistent with that of the initial central point, and the pixel point is added to the region to which the initial central point belongs.
If the consistency rate is not greater than the threshold, the film thickness of the pixel point is inconsistent with that of the initial central point, and the pixel point is not merged.
The round number of these newly added pixel points is the round number of the previous round plus 1.
8. Repeat steps 5-7 until no new pixel points are added, giving the 1st region.
9. From the remaining pixel points, select the point with the maximum texture brightness as the new initial central point.
10. Repeat steps 2-9: the 2nd region is obtained by the same method as the 1st region, the 3rd region by the same method as the 2nd region, and so on until all pixel points are divided. The threshold used for merging is set manually; the empirical value is 0.85. Region segmentation is thus completed: pixel points with different film thicknesses belong to different regions, and the total number of regions is obtained.
Step three: evaluate the quality of the PE laminated paper according to the regions with different film thicknesses.
a. Obtain the laminating difference degree of each region.
In order to measure the difference of film thickness between the regions, the texture vectors of the pixel points of each region need to be compared under the same illumination condition.
The distance from each pixel point in each region to the 1st initial central point is calculated, forming a distance set for each region. The intersection of the distance sets of all regions is then taken; each distance value in the intersection corresponds to one pixel point in each region, and since these pixel points are equidistant from the initial central point, their illumination intensities are considered consistent. All distance values in the intersection thus give a pixel point sequence in each region. Comparing the pixel point sequences of the regions, the laminating difference between the 1st region and each other region is calculated: the mean of the texture-brightness differences between all pixel points in the 1st region's sequence and the corresponding pixel points in the other region's sequence is taken as the laminating difference between the two regions. In this way, the laminating difference between the 1st region and every remaining region is calculated, and the differences are sorted from large to small (positive to negative); the region with the largest laminating difference is the region with the smallest film thickness.
b. Evaluate the laminated paper.
The previous step obtained the laminating difference between the 1st region and each remaining region; the region with the largest laminating difference is the region with the thinnest film. If a die orifice of the casting machine's die head is blocked, the film in the corresponding local area of the paper is thin; if a die orifice is worn, the film in the corresponding local area is thick. The region with the thinnest film may therefore be an area of insufficient film thickness caused by partially blocked die orifices, or, when some die orifices are worn, the area of standard thickness sprayed by the remaining normal orifices. Whether a die orifice is blocked or worn, it affects only a small area in which the film is too thin or too thick. The ratio of the thinnest-film region within the laminated paper is therefore calculated as the number of pixel points in the thinnest region divided by the total number of pixel points in the laminated-paper image. Combining the number of regions with this ratio, the quality of the PE laminated paper is evaluated (the three conditions appear only as images in the original; they compare the number of regions and the ratio against a threshold):
If the first condition holds, the film is uniform, the film thickness meets the requirement, and the laminated paper is of excellent quality.
If the second condition holds, the film is not uniform but the film thickness meets the requirement, and the quality is medium.
If the third condition holds, the film thickness does not meet the requirement, the quality is unqualified, and paper cups produced from this laminated paper may leak.
The threshold is given manually; the empirical value is 0.2.
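Since the three grading conditions appear only as images, the following encoding is an assumed reading (one region means uniform film; several regions with a small thinnest-area ratio means non-uniform but acceptable; a large ratio means unqualified), using the stated empirical threshold of 0.2:

```python
def evaluate(num_regions, thinnest_ratio, threshold=0.2):
    """Quality grading sketch under the assumed reading of the conditions."""
    if num_regions == 1:
        return "excellent"      # film uniform, thickness meets requirement
    if thinnest_ratio <= threshold:
        return "medium"         # non-uniform, thickness still acceptable
    return "unqualified"        # thickness out of spec; cups may leak
```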
The beneficial effects of this embodiment are as follows:
The PE laminated-paper image is analyzed to obtain the texture vector of each pixel point; according to the texture characteristics of each pixel point and its eight neighborhood pixel points, the pixel points of the image are merged iteratively to obtain regions with different film thicknesses; finally, the quality of the PE laminated paper is evaluated according to these regions.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (7)

1. A paper cup raw material quality detection method based on computer vision is characterized by comprising the following steps:
s1: obtaining a gray level image of PE laminating paper;
s2: acquiring a texture vector of each pixel point in the gray image according to the gray value of the pixel point;
s3: obtaining a 1 st thickness region comprising:
s301: selecting a pixel point with the maximum texture brightness in the texture vector as an initial central point, and calculating to obtain the consistency ratio of the pixel point in the eight neighborhoods of the initial central point and the region where the initial central point is located;
s302: merging the pixel points in the eight neighborhoods of the initial central point according to the consistency rate to obtain all newly added pixel points and residual pixel points;
s303: respectively taking each newly added pixel point as a central point, and calculating the consistency rate of the pixel points of the undivided region in eight neighborhoods of all the newly added pixel points and the region where the initial central point is located;
s304: combining the pixels of the unsegmented regions in the eight neighborhoods of the newly added pixels according to the consistency ratio of the pixels of the unsegmented regions in the eight neighborhoods of the newly added pixels and the region where the initial center point is located to obtain all newly added pixels and residual pixels;
s305: continuously repeating the steps S303-S304 until no new pixel points are added, and obtaining a 1 st thickness area and residual pixel points;
s4: selecting the pixel point with the largest texture brightness from the residual pixel points as a new initial central point, obtaining a 2 nd thickness area and the residual pixel points according to the method for obtaining the 1 st thickness area, and sequentially until all the pixel points are divided, so as to obtain all areas with different laminating thicknesses;
s5: and evaluating the quality of the PE laminating paper according to different laminating thickness areas.
2. The paper cup raw material quality detection method based on computer vision according to claim 1, characterized in that the texture vector of each pixel point in the gray image is obtained as follows:
setting the size of a sliding window, and carrying out sliding window detection on the gray level image to obtain a gray level sequence of all pixel points in each window;
acquiring an upper quartile, a lower quartile, a maximum value and a minimum value in the gray value sequence;
calculating the mean value of gray values between the maximum value and the upper quartile in the gray value sequence to obtain the texture brightness of the central pixel point of each window;
calculating the mean value of gray values between the minimum value and the lower quartile in the gray value sequence to obtain the background color brightness of the central pixel point of each window;
calculating the difference value between the texture brightness and the background brightness to obtain the texture definition of the central pixel point of each window;
and taking the texture brightness, the background brightness and the texture definition as three components of the texture vector to obtain the texture vector of each pixel point in the gray level image.
3. The paper cup raw material quality detection method based on computer vision according to claim 1, characterized in that the process of obtaining the coincidence rate of pixel points in the eight neighborhoods of the initial center point and the region of the initial center point is as follows:
calculating cosine similarity between the texture vectors of the initial central point and the pixel points in the eight neighborhoods of the initial central point to obtain the similarity and similarity sequence of the initial central point and the pixel points in the eight neighborhoods of the initial central point;
calculating the consistency rate of the pixel points in the eight neighborhoods of the initial central point and the region where the initial central point is located according to the similarity and the similarity sequence, wherein the expression of the consistency rate (given as an image in the original) uses: the similarity between each pixel point in the eight neighborhoods and the initial central point; the serial numbers of the pixel points in the eight neighborhoods; the number of pixel points in the eight neighborhoods of the initial central point; the similarity sequence; and the maximum value in the similarity sequence.
4. The paper cup raw material quality detection method based on computer vision as claimed in claim 1, wherein the process of merging the pixels in eight neighborhoods of the initial center point is as follows:
setting a threshold value, and judging the relation between the consistency rate of each pixel point in the eight neighborhoods of the initial central point and the area where the initial central point is positioned and the threshold value;
when the consistency rate of each pixel point in the eight neighborhoods of the initial central point and the area where the initial central point is located is larger than a threshold value, merging the pixel point into the area where the initial central point belongs;
and when the consistency rate of each pixel point in the eight neighborhoods of the initial central point and the region where the initial central point is located is not greater than the threshold value, the pixel points are not merged.
5. The paper cup raw material quality detection method based on computer vision of claim 1, wherein the consistency ratio of the pixel points of the undivided region in eight neighborhoods of all the newly added pixel points to the region where the initial center point is located is obtained as follows:
calculating the similarity between the pixel points of the non-divided region in the eight neighborhoods of all the newly added pixel points and the newly added pixel points corresponding to the pixel points;
calculating the direction angle from the pixel point of the non-divided region in the eight neighborhoods of the newly added pixel points to the initial central point, and counting the pixel points of the divided regions contained in each direction to obtain a pixel point sequence in each direction;
obtaining a pixel point texture vector sequence in each direction according to the pixel point sequence in each direction;
calculating to obtain the distribution of each pixel point in the texture vector sequence in each direction according to the texture brightness, the texture definition and the ground color brightness in the texture vector sequence;
obtaining the texture vector change condition of the pixel points in each direction according to the distribution of each pixel point in the texture vector sequence in each direction;
and obtaining the consistency rate of the pixel points of the non-divided regions in the eight neighborhoods of all the newly added pixel points and the region where the initial central point is located according to the consistency rate of all the newly added pixel points, the texture vector change condition of the pixel points in each direction and the similarity between the pixel points of the non-divided regions in the eight neighborhoods of all the newly added pixel points and the newly added pixel points corresponding to the pixel points.
6. The paper cup raw material quality detection method based on computer vision according to claim 1 or 5, characterized in that the expression of the coincidence rate of the pixel points of the undivided region in the eight neighborhoods of all the newly added pixel points and the region of the initial center point (given as an image in the original) uses: the consistency rate of each newly added pixel point of the current round; the consistency-rate sequence of the current round's newly added pixel points and the maximum value in that sequence; the similarity between the undivided pixel point and the corresponding newly added pixel point; the texture vector change sequence; the direction in which the texture vector changes the least; the divergence between the distribution of that direction and the distribution of each other direction; the number of all directions; and the sum of the divergences between the least-change direction and all directions.
7. The computer-vision-based paper cup raw material quality detection method according to claim 1, wherein the quality of the PE laminating paper is evaluated as follows:
according to the distance from each pixel point in every laminating-thickness region to the initial centre point, obtaining, for each region, a sequence of pixel points at equal distances from the initial centre point;
obtaining the lamination difference between the 1st laminating-thickness region and each remaining region according to the texture-brightness differences between all pixel points in the sequence of the 1st region and the corresponding pixel points in each remaining region;
sorting the lamination differences to find the region with the thinnest laminating thickness;
calculating the proportion of the thinnest region within the laminating paper;
and evaluating the quality of the laminating paper according to the number of regions of different laminating thickness and the proportion of the thinnest region.
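The evaluation flow of claim 7 can be sketched as follows. Every name, the choice of mean absolute texture-brightness difference as the "lamination difference", the assumption that the largest difference marks the thinnest region, and the final scoring formula are all hypothetical fill-ins; the patent does not disclose these specifics in this excerpt.

```python
import numpy as np

def evaluate_laminating_quality(regions, image_area):
    """regions: list of 1-D arrays; regions[i] holds the texture-brightness
    values of region i's pixel sequence, ordered by distance to the initial
    centre point so that index j corresponds to the same distance in every
    region. Region 0 is the reference region from claim 7.
    image_area: total number of pixels in the laminating-paper image.
    Returns (index of thinnest region, its area ratio, quality score)."""
    ref = regions[0]
    # Lamination difference of each remaining region vs. region 0
    # (assumed: mean absolute texture-brightness difference over
    # corresponding pixel points).
    diffs = [np.mean(np.abs(ref - r)) for r in regions[1:]]
    # Assumed: the largest difference from the reference marks the
    # thinnest film region.
    thinnest = 1 + int(np.argmax(diffs))
    ratio = len(regions[thinnest]) / image_area
    # Assumed scoring: fewer distinct thickness regions and a smaller
    # thin-area ratio both indicate better laminating quality.
    quality = 1.0 / (len(regions) * (1.0 + ratio))
    return thinnest, ratio, quality
```

In this sketch a perfectly uniform coating (one region, zero thin-area ratio) scores highest, and the score decays as more thickness regions appear or the thinnest region grows.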
CN202111570567.9A 2021-12-21 2021-12-21 Paper cup raw material quality detection method based on computer vision Active CN113962993B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111570567.9A CN113962993B (en) 2021-12-21 2021-12-21 Paper cup raw material quality detection method based on computer vision


Publications (2)

Publication Number Publication Date
CN113962993A true CN113962993A (en) 2022-01-21
CN113962993B CN113962993B (en) 2022-03-15

Family

ID=79473450

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111570567.9A Active CN113962993B (en) 2021-12-21 2021-12-21 Paper cup raw material quality detection method based on computer vision

Country Status (1)

Country Link
CN (1) CN113962993B (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030107736A1 (en) * 2001-12-07 2003-06-12 Dainippon Screen Mfg. Co., Ltd. Apparatus for inspecting pattern on semiconductor substrate
CN102013015A (en) * 2010-12-02 2011-04-13 南京大学 Object-oriented remote sensing image coastline extraction method
CN108052945A (en) * 2017-12-11 2018-05-18 奕响(大连)科技有限公司 A kind of similar determination method of improved pictures of LBP
CN110322447A (en) * 2018-03-30 2019-10-11 张�杰 Picture element acquisition methods based on Region Segmentation Algorithm
CN110766679A (en) * 2019-10-25 2020-02-07 普联技术有限公司 Lens contamination detection method and device and terminal equipment
CN110889821A (en) * 2018-08-17 2020-03-17 波音公司 Apparatus and method for shot peening evaluation
CN111292239A (en) * 2020-01-21 2020-06-16 天目爱视(北京)科技有限公司 Three-dimensional model splicing equipment and method


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
HU JIAO et al.: "Automatic Image Segmentation Based on Pixel Similarity and Graph-Cut", Modern Electronics Technique *
ZHAN YI et al.: "Adaptive Neighborhood Filtering Method for Image Interpolation", Computer Engineering *
Computer Vision LIFE: "The Most Complete Survey | Image Segmentation Algorithms", HTTPS://ZHUANLAN.ZHIHU.COM/P/70758906 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114359416A (en) * 2022-03-17 2022-04-15 山东水利建设集团有限公司 Building outer wall hollowing leakage abnormity detection and positioning method
CN114359416B (en) * 2022-03-17 2022-06-07 山东水利建设集团有限公司 Building outer wall hollowing leakage abnormity detection and positioning method
CN116958134A (en) * 2023-09-19 2023-10-27 青岛伟东包装有限公司 Plastic film extrusion quality evaluation method based on image processing
CN116958134B (en) * 2023-09-19 2023-12-19 青岛伟东包装有限公司 Plastic film extrusion quality evaluation method based on image processing

Also Published As

Publication number Publication date
CN113962993B (en) 2022-03-15

Similar Documents

Publication Publication Date Title
CN113962993B (en) Paper cup raw material quality detection method based on computer vision
CN115311292B (en) Strip steel surface defect detection method and system based on image processing
CN104794491B (en) Based on the fuzzy clustering Surface Defects in Steel Plate detection method presorted
CN110097034B (en) Intelligent face health degree identification and evaluation method
US10198821B2 (en) Automated tattoo recognition techniques
CN115082683A (en) Injection molding defect detection method based on image processing
CN117011292B (en) Method for rapidly detecting surface quality of composite board
CN116030060B (en) Plastic particle quality detection method
CN117095009B (en) PVC decorative plate defect detection method based on image processing
CN116758083A (en) Quick detection method for metal wash basin defects based on computer vision
CN116542972B (en) Wall plate surface defect rapid detection method based on artificial intelligence
CN111192273A (en) Digital shot blasting coverage rate measuring method based on computer vision technology
CN116523923B (en) Battery case defect identification method
CN110288618B (en) Multi-target segmentation method for uneven-illumination image
CN116883408B (en) Integrating instrument shell defect detection method based on artificial intelligence
CN115861307B (en) Fascia gun power supply driving plate welding fault detection method based on artificial intelligence
CN117237646B (en) PET high-temperature flame-retardant adhesive tape flaw extraction method and system based on image segmentation
CN117689655B (en) Metal button surface defect detection method based on computer vision
CN115984272A (en) Semitrailer axle defect identification method based on computer vision
CN115100206A (en) Printing defect identification method for textile with periodic pattern
CN112288010A (en) Finger vein image quality evaluation method based on network learning
CN115908362A (en) Method for detecting wear resistance of skateboard wheel
CN113743421B (en) Method for segmenting and quantitatively analyzing anthocyanin developing area of rice leaf
CN116934752B (en) Glass detection method and system based on artificial intelligence
CN117522873A (en) Solar photovoltaic module production quality detection system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240605

Address after: No. 210-2, Ji'er Village, Xin'andu, Dongxihu District, Wuhan City, Hubei Province, 430000

Patentee after: Wuhan Hangda Packaging Co.,Ltd.

Country or region after: China

Address before: 430000 floor 2, workshop 5, yumenjing brigade, Cihui farm, Dongxihu District, Wuhan City, Hubei Province (8)

Patentee before: Wuhan Linshan industry and Trade Co.,Ltd.

Country or region before: China