CN115294158A - Hot continuous rolling strip steel image segmentation method based on machine vision - Google Patents


Info

Publication number
CN115294158A
CN115294158A (application CN202211169519.3A)
Authority
CN
China
Prior art keywords
pixel point
window
difference
clustering
image
Prior art date
Legal status
Pending
Application number
CN202211169519.3A
Other languages
Chinese (zh)
Inventor
王芳芳 (Wang Fangfang)
Current Assignee
Jiangsu Wansen Lvjian Fabricated Construction Co ltd
Original Assignee
Jiangsu Wansen Lvjian Fabricated Construction Co ltd
Priority date
Filing date
Publication date
Application filed by Jiangsu Wansen Lvjian Fabricated Construction Co ltd filed Critical Jiangsu Wansen Lvjian Fabricated Construction Co ltd
Priority to CN202211169519.3A
Publication of CN115294158A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/0004: Industrial image inspection
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/20: Image enhancement or restoration by the use of local operators
    • G06T 5/70
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/11: Region-based segmentation
    • G06T 7/13: Edge detection
    • G06T 7/40: Analysis of texture
    • G06T 7/41: Analysis of texture based on statistical description of texture
    • G06T 7/45: Analysis of texture based on statistical description of texture using co-occurrence matrix computation
    • G06T 7/90: Determination of colour characteristics
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/762: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Mathematical Physics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of image processing, and in particular to a machine-vision-based image segmentation method for hot continuous rolling strip steel. The method acquires a grayscale image of the hot continuous rolling strip steel and the gradient magnitude of each pixel point and detects edge pixel points; selects a preset number of edge pixel points as cluster centers and acquires the difference degree between each pixel point and the cluster centers; establishes a window with each pixel point as its center point and acquires the pixel's window surrounding degree from the window features of the pixel point and the cluster center; and takes the ratio of the difference degree to the window surrounding degree as the difference distance between the pixel point and the cluster center, constructs a fuzzy C-means objective function from this difference distance and a default membership matrix, and performs fuzzy C-means clustering on the pixel points of the grayscale image to obtain the segmented image of the hot continuous rolling strip steel. The invention divides defect regions in the surface image more finely and improves segmentation precision.

Description

Hot continuous rolling strip steel image segmentation method based on machine vision
Technical Field
The invention relates to the technical field of image processing, in particular to a hot continuous rolling strip steel image segmentation method based on machine vision.
Background
Hot continuous rolling strip steel refers to strip or plate produced by hot rolling and is a major product of the steel industry. Good surface quality matters for high-grade strip because surface defects not only mar the appearance of plate workpieces but also degrade the strip's performance and become initiation sites for breakage and corrosion. How to avoid strip steel defects as far as possible and optimise the output quality of finished strip products is therefore an important problem facing the steel industry.
In the field of machine vision, image segmentation algorithms can partition an image accurately so that its characteristics can be analysed. When the strip has tin pile defects, the unevenly distributed tin piles overlap in the image, and an image segmentation algorithm based on pixel gray level alone cannot divide them accurately. Compared with hard clustering, the fuzzy C-means clustering algorithm (FCM) can divide colour images more finely, but an FCM objective function that relies only on the Euclidean distance does not achieve a good dividing effect here, and the segmentation precision is low.
Disclosure of Invention
In order to solve the problem of low segmentation precision of tin pile defects in hot continuous rolling strip steel images, the invention provides a hot continuous rolling strip steel image segmentation method based on machine vision, and the adopted technical scheme is as follows:
one embodiment of the invention provides a hot continuous rolling strip steel image segmentation method based on machine vision, which comprises the following steps:
collecting a surface image of the hot continuous rolling strip steel, acquiring a gray image of the surface image, and detecting edge pixel points by acquiring the gradient amplitude of each pixel point in the gray image;
selecting a preset number of edge pixel points as clustering centers, and acquiring the amplitude difference of each pixel point based on the gradient amplitude of each pixel point and the gradient amplitude of the clustering centers; obtaining the difference degree of each pixel point and the clustering center by combining the color difference of each pixel point and the clustering center and the corresponding amplitude difference;
constructing a window by taking each pixel point as a central point to obtain a window area of each pixel point; screening target points in a window area, generating a gray level co-occurrence matrix by the target points of each window, and calculating a corresponding inverse difference; taking the distance between each pixel point and the corresponding target point, the number of the target points, the gray value of the target point and the inverse difference as the window characteristics of the corresponding pixel point, and acquiring the window surrounding degree of the pixel point according to the window characteristics of the pixel point and the clustering center;
and taking the ratio of the difference degree and the window surrounding degree as the difference distance between the pixel point and the clustering center, constructing a fuzzy mean clustering target function based on the difference distance and a default membership matrix, and carrying out fuzzy mean clustering on the pixel points in the gray level image to obtain a segmentation image of the hot continuous rolling strip steel.
Preferably, the method for obtaining the amplitude difference comprises the following steps:
and acquiring the sum of the gradient amplitude of each pixel point and a preset constant, and taking the ratio of the gradient amplitude of the clustering center to the sum as the amplitude difference.
Preferably, the method for acquiring the difference degree comprises the following steps:
and converting the surface image into an LAB space to obtain an LAB value of each pixel point, calculating the difference value of each pixel point and the LAB value of the clustering center as the color difference, and performing weighted summation on the amplitude difference and the color difference to obtain the difference degree.
Preferably, the constructing a window with each pixel point as a central point includes:
and acquiring the corresponding window size according to the distance between each pixel point and the nearest edge pixel point, and constructing a window according to the corresponding window size by taking each pixel point as a central point.
Preferably, the screening of the target point in the window region includes:
and taking the center line and the diagonal line of the window through the center point of the window, and acquiring edge pixel points on the taken center line and the diagonal line as target points in the corresponding window area.
Preferably, the method for obtaining the window surrounding degree of the pixel point comprises the following steps:
acquiring the distance between a pixel point and each target point in the corresponding window area, calculating the sum of all distances corresponding to each pixel point as a target distance, and acquiring the ratio of the target distance corresponding to the pixel point to the target distance corresponding to the clustering center as a first index; calculating the ratio of the number of the target points in the window area corresponding to the pixel points to the number of the target points in the window area corresponding to the clustering center as a second index; calculating the ratio of the inverse difference corresponding to the pixel point to the inverse difference corresponding to the clustering center as a third index; calculating the ratio of the sum of the gray values of the target points in the window region corresponding to the pixel points to the sum of the gray values of the target points in the window region corresponding to the clustering center as a fourth index; and taking the product obtained by multiplying the sum of the first index, the second index and the third index by the fourth index as the window surrounding degree.
The embodiment of the invention at least has the following beneficial effects:
Firstly, cluster centers are screened from the edge pixels detected in the surface image of the hot continuous rolling strip steel: the detected edge pixels are likely to lie at defect positions, so choosing cluster centers among them makes it more likely that defective pixels are selected, which makes the result of the subsequent fuzzy clustering more accurate. Secondly, the difference degree between each pixel point and the cluster center is obtained from the amplitude difference and the colour difference, reflecting the difference between them for the subsequent clustering. Thirdly, a window is established around each pixel point, the window features of each pixel are extracted and compared with those of the cluster center; since a cluster center is likely to be a defect position, this comparison judges whether many defects exist in a pixel's window region and thus reflects, from the side, the possibility that the pixel is a defect. Finally, the difference distance between the pixel point and the cluster center is obtained from their difference degree and window surrounding degree and used to construct the objective function: this feature-based difference distance replaces the Euclidean distance of existing fuzzy C-means clustering, improving the clustering so that the resulting segmented image divides defect regions in the surface image more finely and with higher precision.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions and advantages of the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flowchart illustrating the steps of a method for segmenting an image of a hot continuous rolled strip steel based on machine vision according to an embodiment of the present invention;
FIG. 2 is a gray scale image of an acquired image of the surface of hot continuous rolling strip steel;
FIG. 3 is a schematic view of a window centered within a tin stack;
FIG. 4 is a schematic view of a window centered outside of a tin stack;
FIG. 5 is a segmented image obtained by the method of the present invention;
FIG. 6 is a segmented image obtained by an OTSU threshold segmentation algorithm;
FIG. 7 is a segmented image obtained by the original fuzzy mean clustering algorithm.
Detailed Description
In order to further illustrate the technical means adopted by the present invention to achieve its intended objects and their effects, the structure, features, and effects of the machine-vision-based hot continuous rolling strip steel image segmentation method proposed by the present invention are described in detail below with reference to the accompanying drawings and preferred embodiments. In the following description, different references to "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following describes a specific scheme of the hot continuous rolling strip steel image segmentation method based on machine vision in detail with reference to the accompanying drawings.
Referring to fig. 1, a flowchart illustrating steps of a method for segmenting an image of a hot continuous rolling strip steel based on machine vision according to an embodiment of the present invention is shown, the method including the following steps:
and S001, collecting a surface image of the hot continuous rolling strip steel, acquiring a gray image of the surface image, and detecting edge pixel points by acquiring the gradient amplitude of each pixel point in the gray image.
The method comprises the following specific steps:
the method comprises the steps of collecting a surface image of the hot continuous rolling strip steel, wherein the surface image is a color RGB image, carrying out graying to obtain a grayscale image as shown in figure 2, wherein tin pile defects exist in the grayscale image, and tin blocks with different shapes and sizes are adhered to the surface to form the tin pile defects due to the fact that partial tin impurities are lapped on the strip steel before the strip steel is subjected to reflow.
To prevent noise from affecting the segmentation result, the grayscale image is first denoised with Gaussian filtering in the embodiment of the invention, and the OTSU algorithm is then used to binarize it into a binary image. Edges are extracted from the binary image by edge detection with the Sobel operator: the gradient magnitude of the image is obtained through the Sobel convolution, completing the edge extraction, and the pixel points on the extracted edge lines are the edge pixel points.
The process of filtering noise by using gaussian filtering, the OTSU algorithm and the edge detection by using a Sobel operator are all known technologies, and detailed descriptions of the process are omitted.
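As a rough illustration of this preprocessing chain (not the patent's own code), the sketch below reproduces the three steps with NumPy and SciPy. The function name, the Gaussian sigma, and the histogram bin count are choices of this sketch, not values from the embodiment.

```python
import numpy as np
from scipy import ndimage

def detect_edge_pixels(gray, otsu_bins=256):
    """Sketch of the preprocessing: Gaussian denoising, OTSU
    binarization, Sobel gradient magnitude, edge-pixel mask."""
    smoothed = ndimage.gaussian_filter(np.asarray(gray, float), sigma=1.0)
    # OTSU: pick the threshold that maximises the between-class variance
    hist, bin_edges = np.histogram(smoothed, bins=otsu_bins)
    p = hist / hist.sum()
    centers = (bin_edges[:-1] + bin_edges[1:]) / 2
    w0 = np.cumsum(p)            # class-0 probability mass
    mu = np.cumsum(p * centers)  # class-0 cumulative mean mass
    with np.errstate(divide="ignore", invalid="ignore"):
        var_between = (mu[-1] * w0 - mu) ** 2 / (w0 * (1 - w0))
    threshold = centers[np.nanargmax(var_between)]
    binary = (smoothed > threshold).astype(float)
    # Sobel convolution gives the gradient magnitude; nonzero = edge pixel
    gx = ndimage.sobel(binary, axis=1)
    gy = ndimage.sobel(binary, axis=0)
    magnitude = np.hypot(gx, gy)
    return magnitude, magnitude > 0
```

On a synthetic image with a single bright square, the returned mask is nonzero only in a thin band around the square's boundary, which is the behaviour the step above relies on.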
Step S002, selecting a preset number of edge pixel points as clustering centers, and acquiring the amplitude difference of each pixel point based on the gradient amplitude of each pixel point and the gradient amplitude of the clustering centers; and obtaining the difference degree of each pixel point and the clustering center by combining the color difference of each pixel point and the clustering center and the corresponding amplitude difference.
The method comprises the following specific steps:
In the grayscale image, a tin pile defect is a stack of tin blocks with distinct boundaries and uneven internal texture, in sharp contrast with the smooth, impurity-free strip steel around it. The amplitude difference is adopted to describe this characteristic. The gradient magnitude represents the difference in surface metal level between a pixel point and its neighbourhood: the larger the gradient magnitude, the rougher the corresponding pixel, and since edge pixels are certainly rough, the gradient magnitude of a cluster center is large. The sum of each pixel's gradient magnitude and a preset constant is acquired, and the ratio of the cluster center's gradient magnitude to this sum is taken as the amplitude difference:

Q = g_i / (g + a)

where Q is the amplitude difference of the pixel point, g_i is the gradient magnitude of the i-th cluster center, g is the gradient magnitude of the pixel point, and a is a small preset constant that prevents a calculation error when the denominator would otherwise be 0. As an example, the embodiment of the invention takes a = 0.1.

The larger the value of the amplitude difference Q, the more the pixel deviates from the cluster center, the smaller the similarity between them, and the more likely the pixel lies on pure strip steel.
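The amplitude difference above is a one-line computation; the following sketch (function name and argument order are illustrative) writes it out with the embodiment's constant as default:

```python
def amplitude_difference(g_pixel, g_center, a=0.1):
    """Amplitude difference Q = g_center / (g_pixel + a).

    `a` is the small constant (0.1 in the embodiment) that keeps the
    denominator nonzero at perfectly flat, zero-gradient pixels.
    """
    return g_center / (g_pixel + a)
```

A flat pixel (g_pixel = 0) compared against a rough cluster center yields a large Q, matching the interpretation above that large Q points to pure strip steel.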
Further, the colour difference describes the difference between a pixel point and a tin pile cluster center from another aspect. In the collected surface image, the surface of pure strip steel is gray-black, while positions with tin pile defects show the characteristic white of the tin blocks. The collected surface image of the hot continuous rolling strip steel is converted to LAB space to extract its colour, the LAB value of each pixel point is obtained, and the colour difference C between a pixel point and a tin pile cluster center is calculated with the CIEDE2000 formula:

C = ΔE00(LAB_j, LAB_i)

where LAB_j is the LAB value of the pixel point and LAB_i is the LAB value of the i-th cluster center. The larger the value of the colour difference C, the larger the colour difference between the pixel point and the cluster center and the smaller their similarity; a smaller colour difference means a higher similarity. The amplitude difference, in turn, represents how far the roughness at the pixel deviates from the roughness at the cluster center. A weighted summation is adopted, with the larger weight w1 placed on the amplitude difference and weight w2 on the colour difference. Synthesising the above, a difference function D is constructed to describe the degree of difference between the pixel point and the cluster center:

D = w1·Q + w2·C

The larger the value of D, the larger the difference between the pixel point and the cluster center.
S003, constructing a window by taking each pixel point as a central point to obtain a window area of each pixel point; screening target points in a window area, generating a gray level co-occurrence matrix by the target points of each window, and calculating a corresponding inverse difference; and taking the distance between each pixel point and the corresponding target point, the number of the target points, the gray value of the target point and the inverse difference as the window characteristics of the corresponding pixel point, and acquiring the window surrounding degree of the pixel point according to the window characteristics of the pixel point and the clustering center.
For tin pile defects, a definite boundary often forms around the interior tin pile pixels, so the window surrounding degree is established to improve the precision with which tin pile pixels are judged.
The method comprises the following specific steps:
and acquiring the corresponding window size according to the distance between each pixel point and the nearest edge pixel point, and constructing a window according to the corresponding window size by taking each pixel point as a central point to obtain a window area of each pixel point.
For each pixel point, a window is expanded around it as the center point so that the window contains the nearby boundary. The window size E is calculated as:

E = round(2z) + T

where z is the distance from the pixel point to the nearest edge pixel point and round(·) is the rounding function. T is a window constant whose role is to let the window enclose a larger range containing the boundary: if round(2z) is odd, T takes the value 4, and if round(2z) is even, T takes the value 3. This guarantees that the window size E is odd, which is convenient for computation.
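The window-size rule can be written out directly. The round(2z) + parity-constant form is reconstructed from the text (the original formula is an image), so treat it as a sketch of the stated rule rather than a verbatim transcription:

```python
def window_size(z):
    """Odd window size E = round(2z) + T for distance z to the nearest
    edge pixel; T = 4 when round(2z) is odd, T = 3 when it is even."""
    base = round(2 * z)
    t = 4 if base % 2 == 1 else 3
    return base + t
```

Whatever z is, the result is odd, so the window always has a well-defined center pixel.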
The window surrounding degree expresses how strongly a pixel point is surrounded by tin pile defects, i.e. it reflects from the side the possibility that the pixel is a defect. Through the center point of the window, the center lines and diagonals of the window are drawn, and the edge pixel points lying on these center lines and diagonals are taken as the target points of the corresponding window region. As shown in FIG. 3 and FIG. 4, the curves represent the detected edge lines: FIG. 3 shows the case where the window center lies inside a tin pile, i.e. is surrounded by tin pile defects, while FIG. 4 shows the case where the window center lies outside a tin pile, i.e. is not surrounded by tin pile defects. The intersection points of the line segments through the center point with the edge lines are the edge pixel points on the drawn center lines and diagonals, and they serve as the target points of the pixel's window region.
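A minimal sketch of the target-point screening, walking the two center lines and two diagonals of the window over a boolean edge mask (function name and the `half` half-width parameter, i.e. (E-1)//2, are conventions of this sketch):

```python
import numpy as np

def target_points(edge_mask, center, half):
    """Edge pixels on the two center lines and two diagonals of the
    (2*half+1)-sized window around `center` (row, col)."""
    r0, c0 = center
    pts = []
    for dr, dc in [(0, 1), (1, 0), (1, 1), (1, -1)]:  # row line, col line, diagonals
        for step in range(-half, half + 1):
            if step == 0:
                continue  # the center pixel itself is not a target point
            r, c = r0 + dr * step, c0 + dc * step
            if (0 <= r < edge_mask.shape[0] and 0 <= c < edge_mask.shape[1]
                    and edge_mask[r, c]):
                pts.append((r, c))
    return pts
```

For a window centered inside a closed square boundary (the FIG. 3 situation), all four lines intersect the boundary on both sides, so eight target points are found.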
For the window of each pixel point, a gray-level co-occurrence matrix is constructed from the gray values of the target points in the window region. The gray-level co-occurrence matrix represents the joint probability density of two gray values occurring together in the image and is used here to analyse the local density of the target points: where the density is small, the local texture variation is very small, and where the density is large, the variation is large. The inverse difference (IDM) of the gray-level co-occurrence matrix is therefore calculated as a mathematical expression of the local density.
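The inverse difference of a small co-occurrence matrix can be sketched as follows. The 8-level quantisation and the adjacent-pair construction over the target-point gray values are choices of this sketch; the patent does not fix them.

```python
import numpy as np

def inverse_difference(gray_values, levels=8):
    """Inverse difference (IDM) of a gray-level co-occurrence matrix
    built from adjacent pairs of target-point gray values (0..255)."""
    q = np.clip(np.asarray(gray_values) * levels // 256, 0, levels - 1).astype(int)
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:-1], q[1:]):
        glcm[a, b] += 1.0
    if glcm.sum() == 0:
        return 1.0
    p = glcm / glcm.sum()
    i, j = np.indices(p.shape)
    # IDM weights each co-occurrence probability by 1 / (1 + (i - j)^2)
    return float(np.sum(p / (1.0 + (i - j) ** 2)))
```

Uniform target-point grays give IDM = 1, while strongly alternating values give a much smaller IDM, which is the density behaviour the step above exploits.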
The distance between a pixel point and each target point in its window region is acquired, and the sum of all these distances is taken as the pixel's target distance; the ratio of the pixel's target distance to the cluster center's target distance is the first index. The ratio of the number of target points in the pixel's window region to the number in the cluster center's window region is the second index. The ratio of the pixel's inverse difference to the cluster center's inverse difference is the third index. The ratio of the sum of the gray values of the target points in the pixel's window region to the corresponding sum for the cluster center is the fourth index. The product of the sum of the first, second, and third indices with the fourth index is the window surrounding degree.
When local density is considered, the similarity of the pixel points and the spatial positions of the clustering centers is expressed by comparing the number of target points in a window where the pixel points and the clustering centers are located and comparing the sum of linear distances between the center points and the target points. The window surrounding degrees of the pixel points and the clustering centers are comprehensively expressed by using the similarity of the spatial positions of the target points and the similarity of local density, so that the comparison between the pixel points and the clustering centers is more sufficient.
Namely, the first index is obtained from the sums of straight-line distances between the window center and the target points:

r1 = p / y

where p is the sum of the straight-line distances between the center point of the pixel's window region and its target points, and y is the corresponding sum for the cluster center's window region. The second index is obtained from the numbers of target points in the two windows:

r2 = s / s_i

where s is the number of target points in the pixel's window region and s_i is the number of target points in the cluster center's window region. The third index is obtained from the local densities of the target points in the two window regions:

r3 = IDM / IDM(C)

where IDM is the inverse difference for the pixel point and IDM(C) is the inverse difference for the cluster center.
Further, a white-contrast characteristic is added to the window surrounding degree of the pixel point and the cluster center. Since tin piles are white, a cluster center taken as a tin pile is white, and the gray value of white is 255. Pixels at different positions produce different gray values at their target points, so counting the gray values of the target points of the pixel and of the cluster center describes their window surrounding degrees more fully. The ratio of the sum of the gray values of the target points in the pixel's window region to the corresponding sum for the cluster center is taken as the fourth index:

r4 = L / L_i

where L is the sum of the gray values of the target points in the pixel's window region and L_i is the corresponding sum for the cluster center's window region.

Combining the four indices, the window surrounding degree is calculated as:

TG = (r1 + r2 + r3) · r4

The larger the value of the window surrounding degree TG, the more similar the spatial distributions of the pixel point and the cluster center, and the more likely the pixel is surrounded by edge pixel points.
And step S004, taking the ratio of the difference degree to the window surrounding degree as the difference distance between the pixel point and the clustering center, constructing a fuzzy mean clustering target function based on the difference distance and a default membership matrix, and performing fuzzy mean clustering on the pixel points in the gray level image to obtain a segmented image of the hot continuous rolling strip steel.
The method comprises the following specific steps:
the above parameters were analyzed: degree of difference between pixel point and cluster center
Figure 509080DEST_PATH_IMAGE008
The smaller the cluster is, the more likely the cluster belongs to the class to which the cluster center belongs when subsequent fuzzy mean clustering is performed; the larger the value of the window surrounding degree TG is, the larger the similarity with the clustering center is, and the overall difference function is obtained by integrating the window surrounding degree TG and the clustering center
Figure 578667DEST_PATH_IMAGE024
And expressing the difference distance, wherein the smaller the difference distance is, the more similar the pixel point is to the cluster center, and the more possible the pixel point belongs to the category to which the cluster center belongs.
Based on the form of the objective function in fuzzy mean clustering (FCM), the obtained difference distance $D_{ik}$ is substituted for the Euclidean distance in the original objective function of the FCM algorithm, and the objective function $J$ of fuzzy mean clustering is constructed based on the difference distance and a default membership matrix:

$$J = \sum_{i=1}^{c} \sum_{k=1}^{n} u_{ik}^{m} D_{ik}$$
where $m$ is the weight, $c$ is the number of cluster categories, $n$ is the number of pixel points, $u_{ik}$ is the degree of membership of the $k$-th pixel point to the $i$-th clustering center, and $D_{ik}$ is the difference distance between the $k$-th pixel point and the $i$-th clustering center.
The degree of membership $u_{ik}$ is the probability that the $k$-th pixel point belongs to the category of the $i$-th clustering center. In the strip steel surface image, the material in the area where a pixel point lies may be pure strip steel without impurities, or a tin block within a tin pile defect. The sum of the degrees of membership of each pixel point over all clustering centers is 1, i.e. the constraint condition of the function is

$$\sum_{i=1}^{c} u_{ik} = 1, \qquad k = 1, 2, \ldots, n$$
The objective function $J$ drives the calculation of the membership matrix and the clustering centers. The embodiment of the invention emphasizes the influence of the degree of membership on the FCM algorithm, so $m = 3$ is taken.
Cyclic iteration is performed with the FCM algorithm; when the iteration ends, the final clustering centers and the membership matrix are output, and each clustering center approximately indicates the position of a tin pile. Each tin pile is thus segmented as one cluster, and the tin pile defects in the surface image are segmented according to the obtained clustering center points to obtain the segmented image of the hot continuous rolling strip steel, as shown in fig. 5.
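Under the constraint that the memberships of each pixel point sum to 1, minimizing the objective function for a fixed difference-distance matrix yields a closed-form membership update. A minimal numpy sketch follows; it assumes the c-by-n matrix `D` of difference distances has already been computed, and it keeps the candidate clustering centers fixed rather than re-estimating them, since the center-update step depends on the specific form of the difference distance.

```python
import numpy as np

def fcm_memberships(D, m=3.0, eps=1e-10):
    """Membership matrix minimizing J = sum_i sum_k u_ik^m * D_ik
    subject to sum_i u_ik = 1, for a fixed c x n distance matrix D.
    Lagrange conditions give u_ik proportional to D_ik^(-1/(m-1))."""
    W = np.power(np.maximum(D, eps), -1.0 / (m - 1.0))
    return W / W.sum(axis=0, keepdims=True)  # normalize each pixel's column

def segment_labels(D, m=3.0):
    """Assign each pixel to the cluster (tin pile) of highest membership."""
    return np.argmax(fcm_memberships(D, m), axis=0)
```

With m = 3, as chosen in the embodiment, the exponent is -1/2, so pixels at a small difference distance from a clustering center receive sharply higher membership to that center.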
Compared with the segmented image obtained by traditional threshold segmentation, the segmented image obtained by the invention has higher segmentation precision. As an example, the embodiment of the invention provides the segmented image obtained by the OTSU threshold segmentation algorithm, as shown in fig. 6. To segment the image more finely, a fuzzy mean clustering algorithm is introduced; however, the segmented image obtained by fuzzy mean clustering with a single distance, shown in fig. 7, is clearly over-fitted. Compared with the segmented images obtained by the original fuzzy mean clustering algorithm and by the classical OTSU threshold segmentation method, the segmented image obtained by the improved fuzzy mean clustering algorithm shows obvious superiority.
Further, the embodiment of the invention also comprises the following steps:
A quantity threshold $N_0$ and an area threshold $S_0$ are obtained based on prior knowledge. If the number of tin piles $N$ in the surface image of the strip steel satisfies $N < N_0$ and the tin pile area $S$ satisfies $S < S_0$, the strip steel is output as high-quality strip steel; otherwise, the strip steel is treated as unqualified and sent for recovery.
In summary, the embodiment of the present invention collects the surface image of the hot continuous rolling strip steel, obtains the grayscale image of the surface image, and detects the edge pixel points by obtaining the gradient amplitude of each pixel point in the grayscale image; selecting a preset number of edge pixel points as clustering centers, and acquiring the amplitude difference of each pixel point based on the gradient amplitude of each pixel point and the gradient amplitude of the clustering centers; obtaining the difference degree of each pixel point and the clustering center by combining the color difference of each pixel point and the clustering center and the corresponding amplitude difference; constructing a window by taking each pixel point as a central point to obtain a window area of each pixel point; screening target points in the window area, generating a gray level co-occurrence matrix by the target points of each window, and calculating corresponding inverse difference; taking the distance between each pixel point and the corresponding target point, the number of the target points, the gray value of the target point and the inverse difference as the window characteristics of the corresponding pixel point, and acquiring the window surrounding degree of the pixel point according to the window characteristics of the pixel point and the clustering center; and taking the ratio of the difference degree and the window surrounding degree as the difference distance between the pixel point and the clustering center, constructing a fuzzy mean clustering target function based on the difference distance and a default membership matrix, and carrying out fuzzy mean clustering on the pixel points in the gray level image to obtain a segmented image of the hot continuous rolling strip steel. The embodiment of the invention can more finely divide the defect area in the surface image and improve the dividing precision.
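As a companion sketch for the first step recapped above, gradient-amplitude edge detection on the grayscale image might look like the following. The Sobel operator and the mean-plus-standard-deviation threshold are illustrative choices, not taken from the original.

```python
import numpy as np

def gradient_magnitude(gray):
    """Sobel gradient amplitude of a grayscale image, edge-padded borders."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(gray.astype(float), 1, mode="edge")
    h, w = gray.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(3):          # correlate with the two 3x3 Sobel kernels
        for j in range(3):
            win = pad[i:i + h, j:j + w]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    return np.hypot(gx, gy)

def edge_pixels(gray, thresh=None):
    """Edge mask: pixels whose gradient amplitude exceeds a threshold
    (mean + one standard deviation by default, an illustrative choice)."""
    g = gradient_magnitude(gray)
    if thresh is None:
        thresh = g.mean() + g.std()
    return g > thresh
```

On a vertical step image the mask fires exactly on the two columns flanking the intensity jump and stays quiet on the flat regions, which is the behavior the clustering-center selection relies on.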
It should be noted that the order of the above embodiments of the present invention is for description only and does not represent the relative merits of the embodiments; specific embodiments thereof have been described above. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or a sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or advantageous.
The embodiments in the present specification are described in a progressive manner, and the same or similar parts in the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, not for limiting them; modifications to the technical solutions described in the foregoing embodiments, or equivalent replacements of some technical features thereof, which do not depart from the spirit of the technical solutions of the embodiments of the present application, are all included in the scope of the present application.

Claims (6)

1. A hot continuous rolling strip steel image segmentation method based on machine vision is characterized by comprising the following steps:
collecting a surface image of the hot continuous rolled strip steel, acquiring a gray image of the surface image, and detecting edge pixel points by acquiring the gradient amplitude of each pixel point in the gray image;
selecting a preset number of edge pixel points as clustering centers, and acquiring the amplitude difference of each pixel point based on the gradient amplitude of each pixel point and the gradient amplitude of the clustering centers; obtaining the difference degree of each pixel point and the clustering center by combining the color difference of each pixel point and the clustering center and the corresponding amplitude difference;
constructing a window by taking each pixel point as a central point to obtain a window area of each pixel point; screening target points in the window area, generating a gray level co-occurrence matrix by the target points of each window, and calculating corresponding inverse difference; taking the distance between each pixel point and the corresponding target point, the number of the target points, the gray value of the target point and the inverse difference as the window characteristics of the corresponding pixel point, and acquiring the window surrounding degree of the pixel point according to the window characteristics of the pixel point and the clustering center;
and taking the ratio of the difference degree and the window surrounding degree as the difference distance between the pixel point and the clustering center, constructing a fuzzy mean clustering target function based on the difference distance and a default membership matrix, and carrying out fuzzy mean clustering on the pixel points in the gray level image to obtain a segmentation image of the hot continuous rolling strip steel.
2. The hot continuous rolling strip steel image segmentation method based on the machine vision as claimed in claim 1, wherein the amplitude difference is obtained by:
and acquiring the sum of the gradient amplitude of each pixel point and a preset constant, and taking the ratio of the gradient amplitude of the clustering center to the sum as the amplitude difference.
3. The hot continuous rolling strip steel image segmentation method based on the machine vision is characterized in that the difference degree obtaining method comprises the following steps:
and converting the surface image into an LAB space to obtain an LAB value of each pixel point, calculating the difference value of each pixel point and the LAB value of the clustering center as the color difference, and performing weighted summation on the amplitude difference and the color difference to obtain the difference degree.
4. The method for segmenting the hot continuous rolling strip steel image based on the machine vision as claimed in claim 1, wherein the step of constructing the window by taking each pixel point as a central point comprises the following steps:
and acquiring the corresponding window size according to the distance between each pixel point and the nearest edge pixel point, and constructing a window according to the corresponding window size by taking each pixel point as a central point.
5. The hot continuous rolling strip steel image segmentation method based on the machine vision is characterized in that the step of screening target points in a window area comprises the following steps:
and acquiring the center lines and the diagonals of the window passing through its center point, and taking the edge pixel points on the center lines and the diagonals as the target points in the corresponding window area.
6. The hot continuous rolling strip steel image segmentation method based on the machine vision as claimed in claim 1, wherein the method for obtaining the window surrounding degree of the pixel point is as follows:
acquiring the distance between a pixel point and each target point in the corresponding window area, calculating the sum of all distances corresponding to each pixel point as a target distance, and acquiring the ratio of the target distance corresponding to the pixel point to the target distance corresponding to the clustering center as a first index; calculating the ratio of the number of the target points in the window area corresponding to the pixel points to the number of the target points in the window area corresponding to the clustering center as a second index; calculating the ratio of the inverse difference corresponding to the pixel point to the inverse difference corresponding to the clustering center as a third index; calculating the ratio of the sum of the gray values of the target points in the window region corresponding to the pixel points to the sum of the gray values of the target points in the window region corresponding to the clustering center as a fourth index; and taking the product obtained by multiplying the sum of the first index, the second index and the third index by the fourth index as the window surrounding degree.
CN202211169519.3A 2022-09-26 2022-09-26 Hot continuous rolling strip steel image segmentation method based on machine vision Pending CN115294158A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211169519.3A CN115294158A (en) 2022-09-26 2022-09-26 Hot continuous rolling strip steel image segmentation method based on machine vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211169519.3A CN115294158A (en) 2022-09-26 2022-09-26 Hot continuous rolling strip steel image segmentation method based on machine vision

Publications (1)

Publication Number Publication Date
CN115294158A true CN115294158A (en) 2022-11-04

Family

ID=83834359

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211169519.3A Pending CN115294158A (en) 2022-09-26 2022-09-26 Hot continuous rolling strip steel image segmentation method based on machine vision

Country Status (1)

Country Link
CN (1) CN115294158A (en)

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115578732A (en) * 2022-11-21 2023-01-06 山东爱福地生物股份有限公司 Label identification method for fertilizer production line
CN115578389A (en) * 2022-12-08 2023-01-06 青岛澳芯瑞能半导体科技有限公司 Defect detection method of groove MOS device
CN115601362A (en) * 2022-12-14 2023-01-13 临沂农业科技职业学院(筹)(Cn) Welding quality evaluation method based on image processing
CN115861325A (en) * 2023-03-01 2023-03-28 山东中科冶金矿山机械有限公司 Suspension spring defect detection method and system based on image data
CN115861301A (en) * 2023-02-16 2023-03-28 山东百成新材料科技股份有限公司 Multi-material uniformity visual evaluation method for modified asphalt production
CN115937199A (en) * 2023-01-06 2023-04-07 山东济宁圣地电业集团有限公司 Spraying quality detection method for insulating layer of power distribution cabinet
CN115953774A (en) * 2023-03-08 2023-04-11 济宁安泰矿山设备制造有限公司 Alarm display digital identification method based on machine vision
CN116309575A (en) * 2023-05-19 2023-06-23 济宁众达利电气设备有限公司 Electric plug production quality detection method based on image processing
CN116342598A (en) * 2023-05-29 2023-06-27 天维云筑预应力科技(天津)有限公司 Steel strand quality detection method based on machine vision
CN116385433A (en) * 2023-06-02 2023-07-04 青岛宇通管业有限公司 Plastic pipeline welding quality assessment method
CN116402723A (en) * 2023-06-06 2023-07-07 国网山东省电力公司电力科学研究院 Ultraviolet imaging detection system of integrated robot platform
CN116563312A (en) * 2023-07-11 2023-08-08 山东古天电子科技有限公司 Method for dividing display image of double-screen machine
CN116580021A (en) * 2023-07-03 2023-08-11 湖南益友新材料有限公司 Environment-friendly concrete carbon reduction product production and quality detection method
CN116645368A (en) * 2023-07-27 2023-08-25 青岛伟东包装有限公司 Online visual detection method for edge curl of casting film
CN116664574A (en) * 2023-07-31 2023-08-29 山东罗斯夫新材料科技有限公司 Visual detection method for acrylic emulsion production wastewater
CN116883403A (en) * 2023-09-07 2023-10-13 山东国宏生物科技有限公司 Soybean quality detection method based on machine vision
CN117173703A (en) * 2023-11-02 2023-12-05 温州华嘉电器有限公司 Isolating switch state identification method
CN117237384A (en) * 2023-11-16 2023-12-15 潍坊科技学院 Visual detection method and system for intelligent agricultural planted crops
CN117274251A (en) * 2023-11-20 2023-12-22 山东鲁抗医药集团赛特有限责任公司 Tablet quality detection method in medicine production process based on image data
CN117351433A (en) * 2023-12-05 2024-01-05 山东质能新型材料有限公司 Computer vision-based glue-cured mortar plumpness monitoring system


Similar Documents

Publication Publication Date Title
CN115294158A (en) Hot continuous rolling strip steel image segmentation method based on machine vision
CN114972357B (en) Roller surface defect detection method and system based on image processing
CN115082467B (en) Building material welding surface defect detection method based on computer vision
CN115330783A (en) Steel wire rope defect detection method
CN116205919B (en) Hardware part production quality detection method and system based on artificial intelligence
CN110443128B (en) Finger vein identification method based on SURF feature point accurate matching
CN102426649B (en) Simple steel seal digital automatic identification method with high accuracy rate
CN110472479B (en) Finger vein identification method based on SURF feature point extraction and local LBP coding
CN113963042B (en) Metal part defect degree evaluation method based on image processing
CN115049835A (en) Data preprocessing method based on die-casting die defect identification
CN115314714B (en) Data compression method for weld image storage
CN104463199A (en) Rock fragment size classification method based on multiple features and segmentation recorrection
CN113870235A (en) Method for detecting defects of circular stamping part based on quantum firework arc edge extraction
CN115578374A (en) Mechanical part casting quality evaluation method and system
CN115131359B (en) Method for detecting pitting defects on surface of metal workpiece
CN117197140B (en) Irregular metal buckle forming detection method based on machine vision
CN113610850B (en) Decorative paper texture abnormity detection method based on image processing
CN116703251B (en) Rubber ring production quality detection method based on artificial intelligence
CN115049657A (en) Glass defect detection method
CN115063430A (en) Electric pipeline crack detection method based on image processing
CN115274486B (en) Semiconductor surface defect identification method
CN114119603A (en) Image processing-based snack box short shot defect detection method
CN115131356A (en) Steel plate defect classification method based on richness
CN117689655B (en) Metal button surface defect detection method based on computer vision
CN114549525A (en) Industrial image detection method based on improved canny algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination